00:00:00.001 Started by upstream project "autotest-per-patch" build number 122821 00:00:00.001 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.046 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.048 The recommended git tool is: git 00:00:00.049 using credential 00000000-0000-0000-0000-000000000002 00:00:00.059 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.084 Fetching changes from the remote Git repository 00:00:00.086 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.121 Using shallow fetch with depth 1 00:00:00.121 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.121 > git --version # timeout=10 00:00:00.148 > git --version # 'git version 2.39.2' 00:00:00.148 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.149 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.149 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.900 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.912 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.923 Checking out Revision 10da8f6d99838e411e4e94523ded0bfebf3e7100 (FETCH_HEAD) 00:00:04.923 > git config core.sparsecheckout # timeout=10 00:00:04.934 > git read-tree -mu HEAD # timeout=10 00:00:04.950 > git checkout -f 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=5 00:00:04.968 Commit message: "scripts/create_git_mirror: Update path to xnvme submodule" 00:00:04.969 > git rev-list --no-walk 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=10 00:00:05.053 [Pipeline] Start of Pipeline 00:00:05.068 [Pipeline] library 00:00:05.070 Loading library shm_lib@master 00:00:05.070 Library shm_lib@master is cached. Copying from home. 00:00:05.089 [Pipeline] node 00:00:05.095 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.100 [Pipeline] { 00:00:05.112 [Pipeline] catchError 00:00:05.114 [Pipeline] { 00:00:05.130 [Pipeline] wrap 00:00:05.141 [Pipeline] { 00:00:05.149 [Pipeline] stage 00:00:05.151 [Pipeline] { (Prologue) 00:00:05.333 [Pipeline] sh 00:00:05.612 + logger -p user.info -t JENKINS-CI 00:00:05.631 [Pipeline] echo 00:00:05.633 Node: GP11 00:00:05.639 [Pipeline] sh 00:00:05.960 [Pipeline] setCustomBuildProperty 00:00:05.997 [Pipeline] echo 00:00:06.006 Cleanup processes 00:00:06.013 [Pipeline] sh 00:00:06.293 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.293 1035215 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.307 [Pipeline] sh 00:00:06.588 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.588 ++ grep -v 'sudo pgrep' 00:00:06.588 ++ awk '{print $1}' 00:00:06.588 + sudo kill -9 00:00:06.588 + true 00:00:06.603 [Pipeline] cleanWs 00:00:06.611 [WS-CLEANUP] Deleting project workspace... 00:00:06.612 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.618 [WS-CLEANUP] done 00:00:06.621 [Pipeline] setCustomBuildProperty 00:00:06.632 [Pipeline] sh 00:00:06.910 + sudo git config --global --replace-all safe.directory '*' 00:00:06.971 [Pipeline] nodesByLabel 00:00:06.972 Found a total of 1 nodes with the 'sorcerer' label 00:00:06.981 [Pipeline] httpRequest 00:00:06.985 HttpMethod: GET 00:00:06.986 URL: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:06.988 Sending request to url: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:07.010 Response Code: HTTP/1.1 200 OK 00:00:07.010 Success: Status code 200 is in the accepted range: 200,404 00:00:07.011 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:10.189 [Pipeline] sh 00:00:10.471 + tar --no-same-owner -xf jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:10.493 [Pipeline] httpRequest 00:00:10.498 HttpMethod: GET 00:00:10.498 URL: http://10.211.164.101/packages/spdk_29773365071b8e2775c5fd84455d9767c82e3d56.tar.gz 00:00:10.499 Sending request to url: http://10.211.164.101/packages/spdk_29773365071b8e2775c5fd84455d9767c82e3d56.tar.gz 00:00:10.513 Response Code: HTTP/1.1 200 OK 00:00:10.514 Success: Status code 200 is in the accepted range: 200,404 00:00:10.514 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_29773365071b8e2775c5fd84455d9767c82e3d56.tar.gz 00:00:39.200 [Pipeline] sh 00:00:39.480 + tar --no-same-owner -xf spdk_29773365071b8e2775c5fd84455d9767c82e3d56.tar.gz 00:00:42.016 [Pipeline] sh 00:00:42.295 + git -C spdk log --oneline -n5 00:00:42.295 297733650 nvmf: don't touch subsystem->flags.allow_any_host directly 00:00:42.295 35948d8fa build: rename SPDK_MOCK_SYSCALLS -> SPDK_MOCK_SYMBOLS 00:00:42.295 69872294e nvme: make spdk_nvme_dhchap_get_digest_length() public 00:00:42.295 67ab645cd nvmf/auth: send AUTH_failure1 message 00:00:42.295 c54a29d8f test/nvmf: add auth timeout unit tests 00:00:42.306 [Pipeline] } 00:00:42.322 [Pipeline] // stage 00:00:42.330 [Pipeline] stage 00:00:42.333 [Pipeline] { (Prepare) 00:00:42.352 [Pipeline] writeFile 00:00:42.369 [Pipeline] sh 00:00:42.649 + logger -p user.info -t JENKINS-CI 00:00:42.660 [Pipeline] sh 00:00:42.946 + logger -p user.info -t JENKINS-CI 00:00:42.958 [Pipeline] sh 00:00:43.237 + cat autorun-spdk.conf 00:00:43.237 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:43.237 SPDK_TEST_NVMF=1 00:00:43.237 SPDK_TEST_NVME_CLI=1 00:00:43.237 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:43.237 SPDK_TEST_NVMF_NICS=e810 00:00:43.237 SPDK_TEST_VFIOUSER=1 00:00:43.237 SPDK_RUN_UBSAN=1 00:00:43.237 NET_TYPE=phy 00:00:43.243 RUN_NIGHTLY=0 00:00:43.247 [Pipeline] readFile 00:00:43.268 [Pipeline] withEnv 00:00:43.270 [Pipeline] { 00:00:43.281 [Pipeline] sh 00:00:43.561 + set -ex 00:00:43.561 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:43.561 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:43.561 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:43.561 ++ SPDK_TEST_NVMF=1 00:00:43.561 ++ SPDK_TEST_NVME_CLI=1 00:00:43.561 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:43.561 ++ SPDK_TEST_NVMF_NICS=e810 00:00:43.561 ++ SPDK_TEST_VFIOUSER=1 00:00:43.561 ++ SPDK_RUN_UBSAN=1 00:00:43.561 ++ NET_TYPE=phy 00:00:43.561 ++ RUN_NIGHTLY=0 00:00:43.561 + case $SPDK_TEST_NVMF_NICS in 00:00:43.561 + DRIVERS=ice 00:00:43.561 + [[ tcp == \r\d\m\a ]] 00:00:43.561 + [[ -n ice ]] 00:00:43.561 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:43.561 
rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:43.561 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:43.561 rmmod: ERROR: Module irdma is not currently loaded 00:00:43.561 rmmod: ERROR: Module i40iw is not currently loaded 00:00:43.561 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:43.561 + true 00:00:43.561 + for D in $DRIVERS 00:00:43.561 + sudo modprobe ice 00:00:43.561 + exit 0 00:00:43.569 [Pipeline] } 00:00:43.586 [Pipeline] // withEnv 00:00:43.590 [Pipeline] } 00:00:43.605 [Pipeline] // stage 00:00:43.614 [Pipeline] catchError 00:00:43.616 [Pipeline] { 00:00:43.631 [Pipeline] timeout 00:00:43.631 Timeout set to expire in 40 min 00:00:43.633 [Pipeline] { 00:00:43.648 [Pipeline] stage 00:00:43.650 [Pipeline] { (Tests) 00:00:43.664 [Pipeline] sh 00:00:43.944 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:43.944 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:43.944 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:43.944 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:43.944 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:43.944 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:43.944 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:43.944 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:43.944 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:43.944 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:43.944 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:43.944 + source /etc/os-release 00:00:43.944 ++ NAME='Fedora Linux' 00:00:43.944 ++ VERSION='38 (Cloud Edition)' 00:00:43.944 ++ ID=fedora 00:00:43.944 ++ VERSION_ID=38 00:00:43.944 ++ VERSION_CODENAME= 00:00:43.944 ++ PLATFORM_ID=platform:f38 00:00:43.944 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:43.944 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:43.944 ++ LOGO=fedora-logo-icon 00:00:43.944 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:43.944 ++ HOME_URL=https://fedoraproject.org/ 00:00:43.944 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:43.944 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:43.944 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:43.944 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:43.944 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:43.944 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:43.944 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:43.944 ++ SUPPORT_END=2024-05-14 00:00:43.944 ++ VARIANT='Cloud Edition' 00:00:43.944 ++ VARIANT_ID=cloud 00:00:43.944 + uname -a 00:00:43.944 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:43.944 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:45.317 Hugepages 00:00:45.317 node hugesize free / total 00:00:45.317 node0 1048576kB 0 / 0 00:00:45.317 node0 2048kB 0 / 0 00:00:45.317 node1 1048576kB 0 / 0 00:00:45.317 node1 2048kB 0 / 0 00:00:45.317 00:00:45.317 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:45.317 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:00:45.317 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:00:45.317 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:00:45.317 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:00:45.317 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:00:45.317 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:00:45.317 I/OAT 
0000:00:04.6 8086 0e26 0 ioatdma - - 00:00:45.317 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:00:45.317 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:00:45.317 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:00:45.317 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:00:45.317 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:00:45.317 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:00:45.317 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:00:45.317 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:00:45.317 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:00:45.317 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:45.317 + rm -f /tmp/spdk-ld-path 00:00:45.317 + source autorun-spdk.conf 00:00:45.317 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:45.317 ++ SPDK_TEST_NVMF=1 00:00:45.317 ++ SPDK_TEST_NVME_CLI=1 00:00:45.317 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:45.317 ++ SPDK_TEST_NVMF_NICS=e810 00:00:45.317 ++ SPDK_TEST_VFIOUSER=1 00:00:45.317 ++ SPDK_RUN_UBSAN=1 00:00:45.317 ++ NET_TYPE=phy 00:00:45.317 ++ RUN_NIGHTLY=0 00:00:45.317 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:45.317 + [[ -n '' ]] 00:00:45.317 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:45.317 + for M in /var/spdk/build-*-manifest.txt 00:00:45.317 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:45.317 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:45.317 + for M in /var/spdk/build-*-manifest.txt 00:00:45.317 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:45.317 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:45.317 ++ uname 00:00:45.317 + [[ Linux == \L\i\n\u\x ]] 00:00:45.317 + sudo dmesg -T 00:00:45.317 + sudo dmesg --clear 00:00:45.317 + dmesg_pid=1035976 00:00:45.317 + [[ Fedora Linux == FreeBSD ]] 00:00:45.317 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:45.317 + sudo dmesg -Tw 00:00:45.317 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:45.317 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:45.317 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:45.317 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:45.317 + [[ -x /usr/src/fio-static/fio ]] 00:00:45.317 + export FIO_BIN=/usr/src/fio-static/fio 00:00:45.317 + FIO_BIN=/usr/src/fio-static/fio 00:00:45.317 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:45.317 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:45.317 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:45.317 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:45.318 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:45.318 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:45.318 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:45.318 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:45.318 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:45.318 Test configuration: 00:00:45.318 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:45.318 SPDK_TEST_NVMF=1 00:00:45.318 SPDK_TEST_NVME_CLI=1 00:00:45.318 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:45.318 SPDK_TEST_NVMF_NICS=e810 00:00:45.318 SPDK_TEST_VFIOUSER=1 00:00:45.318 SPDK_RUN_UBSAN=1 00:00:45.318 NET_TYPE=phy 00:00:45.318 RUN_NIGHTLY=0 00:47:57 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:45.318 00:47:57 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:45.318 00:47:57 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:45.318 00:47:57 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:45.318 00:47:57 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:45.318 00:47:57 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:45.318 00:47:57 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:45.318 00:47:57 -- paths/export.sh@5 -- $ export PATH 00:00:45.318 00:47:57 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:45.318 00:47:57 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:45.318 00:47:57 -- common/autobuild_common.sh@437 -- $ date +%s 00:00:45.318 00:47:57 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715726877.XXXXXX 00:00:45.318 00:47:57 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715726877.pychwK 00:00:45.318 00:47:57 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:00:45.318 00:47:57 -- 
common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:00:45.318 00:47:57 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:45.318 00:47:57 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:45.318 00:47:57 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:45.318 00:47:57 -- common/autobuild_common.sh@453 -- $ get_config_params 00:00:45.318 00:47:57 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:00:45.318 00:47:57 -- common/autotest_common.sh@10 -- $ set +x 00:00:45.318 00:47:57 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:45.318 00:47:57 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:00:45.318 00:47:57 -- pm/common@17 -- $ local monitor 00:00:45.318 00:47:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:45.318 00:47:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:45.318 00:47:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:45.318 00:47:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:45.318 00:47:57 -- pm/common@21 -- $ date +%s 00:00:45.318 00:47:57 -- pm/common@21 -- $ date +%s 00:00:45.318 00:47:57 -- pm/common@25 -- $ sleep 1 00:00:45.318 00:47:57 -- pm/common@21 -- $ date +%s 00:00:45.318 00:47:57 -- pm/common@21 -- $ date +%s 00:00:45.318 00:47:57 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715726877 00:00:45.318 00:47:57 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715726877 00:00:45.318 00:47:57 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715726877 00:00:45.318 00:47:57 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715726877 00:00:45.318 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715726877_collect-vmstat.pm.log 00:00:45.318 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715726877_collect-cpu-load.pm.log 00:00:45.318 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715726877_collect-cpu-temp.pm.log 00:00:45.318 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715726877_collect-bmc-pm.bmc.pm.log 00:00:46.253 00:47:58 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:00:46.253 00:47:58 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:46.253 00:47:58 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:46.253 00:47:58 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:46.253 00:47:58 -- spdk/autobuild.sh@16 -- $ date -u 00:00:46.253 Tue May 14 10:47:58 PM UTC 2024 00:00:46.253 00:47:58 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:46.253 v24.05-pre-623-g297733650 00:00:46.253 00:47:58 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:46.253 00:47:58 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:46.253 00:47:58 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:46.253 00:47:58 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:00:46.253 00:47:58 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:00:46.253 00:47:58 -- common/autotest_common.sh@10 -- $ set +x 00:00:46.253 ************************************ 00:00:46.253 START TEST ubsan 00:00:46.253 ************************************ 00:00:46.253 00:47:58 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:00:46.253 using ubsan 00:00:46.253 00:00:46.253 real 0m0.000s 00:00:46.253 user 0m0.000s 00:00:46.253 sys 0m0.000s 00:00:46.253 00:47:58 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:00:46.253 00:47:58 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:46.253 ************************************ 00:00:46.253 END TEST ubsan 00:00:46.253 ************************************ 00:00:46.511 00:47:58 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:46.511 00:47:58 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:46.511 00:47:58 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:46.511 00:47:58 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:46.511 00:47:58 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:46.511 00:47:58 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:46.511 00:47:58 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:46.511 00:47:58 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:46.511 00:47:58 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:46.511 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:46.511 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:46.769 Using 'verbs' RDMA provider 00:00:57.307 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:07.281 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:07.281 Creating mk/config.mk...done. 00:01:07.281 Creating mk/cc.flags.mk...done. 00:01:07.281 Type 'make' to build. 00:01:07.281 00:48:18 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:07.281 00:48:18 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:07.281 00:48:18 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:07.281 00:48:18 -- common/autotest_common.sh@10 -- $ set +x 00:01:07.281 ************************************ 00:01:07.281 START TEST make 00:01:07.281 ************************************ 00:01:07.281 00:48:18 make -- common/autotest_common.sh@1121 -- $ make -j48 00:01:07.281 make[1]: Nothing to be done for 'all'. 
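For orientation, the autobuild stage traced above amounts to configuring SPDK with the options echoed in the log and then building it. A minimal standalone sketch of that sequence follows; the paths and flags are copied from the configure and make lines above, while the job itself drives this through spdk/autorun.sh and autobuild.sh with extra run_test/xtrace bookkeeping, so treat it as an approximation rather than the exact script.

# Approximate configure/build sequence for this job (flags taken verbatim from the log above).
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
make -j48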
00:01:07.848 The Meson build system 00:01:07.848 Version: 1.3.1 00:01:07.848 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:07.848 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:07.848 Build type: native build 00:01:07.848 Project name: libvfio-user 00:01:07.848 Project version: 0.0.1 00:01:07.848 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:07.848 C linker for the host machine: cc ld.bfd 2.39-16 00:01:07.848 Host machine cpu family: x86_64 00:01:07.848 Host machine cpu: x86_64 00:01:07.848 Run-time dependency threads found: YES 00:01:07.848 Library dl found: YES 00:01:07.848 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:07.848 Run-time dependency json-c found: YES 0.17 00:01:07.848 Run-time dependency cmocka found: YES 1.1.7 00:01:07.848 Program pytest-3 found: NO 00:01:07.848 Program flake8 found: NO 00:01:07.848 Program misspell-fixer found: NO 00:01:07.848 Program restructuredtext-lint found: NO 00:01:07.848 Program valgrind found: YES (/usr/bin/valgrind) 00:01:07.848 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:07.848 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:07.848 Compiler for C supports arguments -Wwrite-strings: YES 00:01:07.848 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:07.848 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:07.848 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:07.848 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:07.848 Build targets in project: 8 00:01:07.848 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:07.848 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:07.848 00:01:07.848 libvfio-user 0.0.1 00:01:07.848 00:01:07.848 User defined options 00:01:07.848 buildtype : debug 00:01:07.848 default_library: shared 00:01:07.848 libdir : /usr/local/lib 00:01:07.848 00:01:07.848 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:08.800 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:08.800 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:08.800 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:08.800 [3/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:08.800 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:08.800 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:08.800 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:09.066 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:09.066 [8/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:09.066 [9/37] Compiling C object samples/null.p/null.c.o 00:01:09.066 [10/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:09.066 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:09.066 [12/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:09.066 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:09.066 [14/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:09.066 [15/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:09.066 [16/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:09.066 [17/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:09.066 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:09.066 [19/37] Compiling C object samples/server.p/server.c.o 00:01:09.066 [20/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:09.066 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:09.066 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:09.066 [23/37] Compiling C object samples/client.p/client.c.o 00:01:09.066 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:09.066 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:09.066 [26/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:09.066 [27/37] Linking target samples/client 00:01:09.066 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:09.327 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:09.327 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:09.327 [31/37] Linking target test/unit_tests 00:01:09.327 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:09.587 [33/37] Linking target samples/null 00:01:09.587 [34/37] Linking target samples/server 00:01:09.587 [35/37] Linking target samples/gpio-pci-idio-16 00:01:09.587 [36/37] Linking target samples/lspci 00:01:09.587 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:09.587 INFO: autodetecting backend as ninja 00:01:09.587 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
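The Meson/ninja output above is the bundled libvfio-user being built before the main SPDK build. The exact meson invocation is not echoed in the log, so the setup call in this sketch is an assumption reconstructed from the printed user options (buildtype debug, shared default_library); the ninja build and the DESTDIR install mirror the log entries that follow.

# Sketch of the libvfio-user submodule build; the meson setup arguments are
# inferred from the printed options, not copied from the log.
SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
meson setup "$BUILD" "$SRC" --buildtype=debug --default-library=shared
ninja -C "$BUILD"
DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
    meson install --quiet -C "$BUILD"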
00:01:09.587 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:10.163 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:10.163 ninja: no work to do. 00:01:15.439 The Meson build system 00:01:15.439 Version: 1.3.1 00:01:15.439 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:15.439 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:15.439 Build type: native build 00:01:15.439 Program cat found: YES (/usr/bin/cat) 00:01:15.439 Project name: DPDK 00:01:15.439 Project version: 23.11.0 00:01:15.439 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:15.439 C linker for the host machine: cc ld.bfd 2.39-16 00:01:15.439 Host machine cpu family: x86_64 00:01:15.439 Host machine cpu: x86_64 00:01:15.439 Message: ## Building in Developer Mode ## 00:01:15.439 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:15.439 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:15.439 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:15.439 Program python3 found: YES (/usr/bin/python3) 00:01:15.439 Program cat found: YES (/usr/bin/cat) 00:01:15.439 Compiler for C supports arguments -march=native: YES 00:01:15.439 Checking for size of "void *" : 8 00:01:15.439 Checking for size of "void *" : 8 (cached) 00:01:15.439 Library m found: YES 00:01:15.439 Library numa found: YES 00:01:15.439 Has header "numaif.h" : YES 00:01:15.439 Library fdt found: NO 00:01:15.439 Library execinfo found: NO 00:01:15.439 Has header "execinfo.h" : YES 00:01:15.439 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:15.439 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:15.439 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:15.439 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:15.439 Run-time dependency openssl found: YES 3.0.9 00:01:15.439 Run-time dependency libpcap found: YES 1.10.4 00:01:15.439 Has header "pcap.h" with dependency libpcap: YES 00:01:15.439 Compiler for C supports arguments -Wcast-qual: YES 00:01:15.439 Compiler for C supports arguments -Wdeprecated: YES 00:01:15.439 Compiler for C supports arguments -Wformat: YES 00:01:15.439 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:15.439 Compiler for C supports arguments -Wformat-security: NO 00:01:15.439 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:15.439 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:15.439 Compiler for C supports arguments -Wnested-externs: YES 00:01:15.439 Compiler for C supports arguments -Wold-style-definition: YES 00:01:15.439 Compiler for C supports arguments -Wpointer-arith: YES 00:01:15.439 Compiler for C supports arguments -Wsign-compare: YES 00:01:15.439 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:15.439 Compiler for C supports arguments -Wundef: YES 00:01:15.439 Compiler for C supports arguments -Wwrite-strings: YES 00:01:15.439 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:15.439 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:15.439 Compiler for C supports arguments 
-Wno-missing-field-initializers: YES 00:01:15.439 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:15.439 Program objdump found: YES (/usr/bin/objdump) 00:01:15.439 Compiler for C supports arguments -mavx512f: YES 00:01:15.439 Checking if "AVX512 checking" compiles: YES 00:01:15.439 Fetching value of define "__SSE4_2__" : 1 00:01:15.439 Fetching value of define "__AES__" : 1 00:01:15.439 Fetching value of define "__AVX__" : 1 00:01:15.439 Fetching value of define "__AVX2__" : (undefined) 00:01:15.439 Fetching value of define "__AVX512BW__" : (undefined) 00:01:15.439 Fetching value of define "__AVX512CD__" : (undefined) 00:01:15.439 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:15.439 Fetching value of define "__AVX512F__" : (undefined) 00:01:15.439 Fetching value of define "__AVX512VL__" : (undefined) 00:01:15.439 Fetching value of define "__PCLMUL__" : 1 00:01:15.439 Fetching value of define "__RDRND__" : 1 00:01:15.439 Fetching value of define "__RDSEED__" : (undefined) 00:01:15.439 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:15.439 Fetching value of define "__znver1__" : (undefined) 00:01:15.439 Fetching value of define "__znver2__" : (undefined) 00:01:15.439 Fetching value of define "__znver3__" : (undefined) 00:01:15.439 Fetching value of define "__znver4__" : (undefined) 00:01:15.439 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:15.439 Message: lib/log: Defining dependency "log" 00:01:15.439 Message: lib/kvargs: Defining dependency "kvargs" 00:01:15.439 Message: lib/telemetry: Defining dependency "telemetry" 00:01:15.439 Checking for function "getentropy" : NO 00:01:15.439 Message: lib/eal: Defining dependency "eal" 00:01:15.439 Message: lib/ring: Defining dependency "ring" 00:01:15.439 Message: lib/rcu: Defining dependency "rcu" 00:01:15.439 Message: lib/mempool: Defining dependency "mempool" 00:01:15.439 Message: lib/mbuf: Defining dependency "mbuf" 00:01:15.439 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:15.439 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:15.439 Compiler for C supports arguments -mpclmul: YES 00:01:15.439 Compiler for C supports arguments -maes: YES 00:01:15.439 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:15.439 Compiler for C supports arguments -mavx512bw: YES 00:01:15.439 Compiler for C supports arguments -mavx512dq: YES 00:01:15.439 Compiler for C supports arguments -mavx512vl: YES 00:01:15.439 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:15.439 Compiler for C supports arguments -mavx2: YES 00:01:15.439 Compiler for C supports arguments -mavx: YES 00:01:15.439 Message: lib/net: Defining dependency "net" 00:01:15.439 Message: lib/meter: Defining dependency "meter" 00:01:15.439 Message: lib/ethdev: Defining dependency "ethdev" 00:01:15.439 Message: lib/pci: Defining dependency "pci" 00:01:15.439 Message: lib/cmdline: Defining dependency "cmdline" 00:01:15.439 Message: lib/hash: Defining dependency "hash" 00:01:15.439 Message: lib/timer: Defining dependency "timer" 00:01:15.439 Message: lib/compressdev: Defining dependency "compressdev" 00:01:15.439 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:15.439 Message: lib/dmadev: Defining dependency "dmadev" 00:01:15.439 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:15.439 Message: lib/power: Defining dependency "power" 00:01:15.439 Message: lib/reorder: Defining dependency "reorder" 00:01:15.439 Message: lib/security: Defining dependency "security" 
00:01:15.439 Has header "linux/userfaultfd.h" : YES 00:01:15.439 Has header "linux/vduse.h" : YES 00:01:15.439 Message: lib/vhost: Defining dependency "vhost" 00:01:15.439 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:15.439 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:15.439 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:15.439 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:15.440 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:15.440 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:15.440 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:15.440 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:15.440 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:15.440 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:15.440 Program doxygen found: YES (/usr/bin/doxygen) 00:01:15.440 Configuring doxy-api-html.conf using configuration 00:01:15.440 Configuring doxy-api-man.conf using configuration 00:01:15.440 Program mandb found: YES (/usr/bin/mandb) 00:01:15.440 Program sphinx-build found: NO 00:01:15.440 Configuring rte_build_config.h using configuration 00:01:15.440 Message: 00:01:15.440 ================= 00:01:15.440 Applications Enabled 00:01:15.440 ================= 00:01:15.440 00:01:15.440 apps: 00:01:15.440 00:01:15.440 00:01:15.440 Message: 00:01:15.440 ================= 00:01:15.440 Libraries Enabled 00:01:15.440 ================= 00:01:15.440 00:01:15.440 libs: 00:01:15.440 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:15.440 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:15.440 cryptodev, dmadev, power, reorder, security, vhost, 00:01:15.440 00:01:15.440 Message: 00:01:15.440 =============== 00:01:15.440 Drivers Enabled 00:01:15.440 =============== 00:01:15.440 00:01:15.440 common: 00:01:15.440 00:01:15.440 bus: 00:01:15.440 pci, vdev, 00:01:15.440 mempool: 00:01:15.440 ring, 00:01:15.440 dma: 00:01:15.440 00:01:15.440 net: 00:01:15.440 00:01:15.440 crypto: 00:01:15.440 00:01:15.440 compress: 00:01:15.440 00:01:15.440 vdpa: 00:01:15.440 00:01:15.440 00:01:15.440 Message: 00:01:15.440 ================= 00:01:15.440 Content Skipped 00:01:15.440 ================= 00:01:15.440 00:01:15.440 apps: 00:01:15.440 dumpcap: explicitly disabled via build config 00:01:15.440 graph: explicitly disabled via build config 00:01:15.440 pdump: explicitly disabled via build config 00:01:15.440 proc-info: explicitly disabled via build config 00:01:15.440 test-acl: explicitly disabled via build config 00:01:15.440 test-bbdev: explicitly disabled via build config 00:01:15.440 test-cmdline: explicitly disabled via build config 00:01:15.440 test-compress-perf: explicitly disabled via build config 00:01:15.440 test-crypto-perf: explicitly disabled via build config 00:01:15.440 test-dma-perf: explicitly disabled via build config 00:01:15.440 test-eventdev: explicitly disabled via build config 00:01:15.440 test-fib: explicitly disabled via build config 00:01:15.440 test-flow-perf: explicitly disabled via build config 00:01:15.440 test-gpudev: explicitly disabled via build config 00:01:15.440 test-mldev: explicitly disabled via build config 00:01:15.440 test-pipeline: explicitly disabled via build config 00:01:15.440 test-pmd: explicitly disabled via build config 00:01:15.440 test-regex: explicitly disabled via build config 
00:01:15.440 test-sad: explicitly disabled via build config 00:01:15.440 test-security-perf: explicitly disabled via build config 00:01:15.440 00:01:15.440 libs: 00:01:15.440 metrics: explicitly disabled via build config 00:01:15.440 acl: explicitly disabled via build config 00:01:15.440 bbdev: explicitly disabled via build config 00:01:15.440 bitratestats: explicitly disabled via build config 00:01:15.440 bpf: explicitly disabled via build config 00:01:15.440 cfgfile: explicitly disabled via build config 00:01:15.440 distributor: explicitly disabled via build config 00:01:15.440 efd: explicitly disabled via build config 00:01:15.440 eventdev: explicitly disabled via build config 00:01:15.440 dispatcher: explicitly disabled via build config 00:01:15.440 gpudev: explicitly disabled via build config 00:01:15.440 gro: explicitly disabled via build config 00:01:15.440 gso: explicitly disabled via build config 00:01:15.440 ip_frag: explicitly disabled via build config 00:01:15.440 jobstats: explicitly disabled via build config 00:01:15.440 latencystats: explicitly disabled via build config 00:01:15.440 lpm: explicitly disabled via build config 00:01:15.440 member: explicitly disabled via build config 00:01:15.440 pcapng: explicitly disabled via build config 00:01:15.440 rawdev: explicitly disabled via build config 00:01:15.440 regexdev: explicitly disabled via build config 00:01:15.440 mldev: explicitly disabled via build config 00:01:15.440 rib: explicitly disabled via build config 00:01:15.440 sched: explicitly disabled via build config 00:01:15.440 stack: explicitly disabled via build config 00:01:15.440 ipsec: explicitly disabled via build config 00:01:15.440 pdcp: explicitly disabled via build config 00:01:15.440 fib: explicitly disabled via build config 00:01:15.440 port: explicitly disabled via build config 00:01:15.440 pdump: explicitly disabled via build config 00:01:15.440 table: explicitly disabled via build config 00:01:15.440 pipeline: explicitly disabled via build config 00:01:15.440 graph: explicitly disabled via build config 00:01:15.440 node: explicitly disabled via build config 00:01:15.440 00:01:15.440 drivers: 00:01:15.440 common/cpt: not in enabled drivers build config 00:01:15.440 common/dpaax: not in enabled drivers build config 00:01:15.440 common/iavf: not in enabled drivers build config 00:01:15.440 common/idpf: not in enabled drivers build config 00:01:15.440 common/mvep: not in enabled drivers build config 00:01:15.440 common/octeontx: not in enabled drivers build config 00:01:15.440 bus/auxiliary: not in enabled drivers build config 00:01:15.440 bus/cdx: not in enabled drivers build config 00:01:15.440 bus/dpaa: not in enabled drivers build config 00:01:15.440 bus/fslmc: not in enabled drivers build config 00:01:15.440 bus/ifpga: not in enabled drivers build config 00:01:15.440 bus/platform: not in enabled drivers build config 00:01:15.440 bus/vmbus: not in enabled drivers build config 00:01:15.440 common/cnxk: not in enabled drivers build config 00:01:15.440 common/mlx5: not in enabled drivers build config 00:01:15.440 common/nfp: not in enabled drivers build config 00:01:15.440 common/qat: not in enabled drivers build config 00:01:15.440 common/sfc_efx: not in enabled drivers build config 00:01:15.440 mempool/bucket: not in enabled drivers build config 00:01:15.440 mempool/cnxk: not in enabled drivers build config 00:01:15.440 mempool/dpaa: not in enabled drivers build config 00:01:15.440 mempool/dpaa2: not in enabled drivers build config 00:01:15.440 
mempool/octeontx: not in enabled drivers build config 00:01:15.440 mempool/stack: not in enabled drivers build config 00:01:15.440 dma/cnxk: not in enabled drivers build config 00:01:15.440 dma/dpaa: not in enabled drivers build config 00:01:15.440 dma/dpaa2: not in enabled drivers build config 00:01:15.440 dma/hisilicon: not in enabled drivers build config 00:01:15.440 dma/idxd: not in enabled drivers build config 00:01:15.440 dma/ioat: not in enabled drivers build config 00:01:15.440 dma/skeleton: not in enabled drivers build config 00:01:15.440 net/af_packet: not in enabled drivers build config 00:01:15.440 net/af_xdp: not in enabled drivers build config 00:01:15.440 net/ark: not in enabled drivers build config 00:01:15.440 net/atlantic: not in enabled drivers build config 00:01:15.440 net/avp: not in enabled drivers build config 00:01:15.440 net/axgbe: not in enabled drivers build config 00:01:15.440 net/bnx2x: not in enabled drivers build config 00:01:15.440 net/bnxt: not in enabled drivers build config 00:01:15.440 net/bonding: not in enabled drivers build config 00:01:15.440 net/cnxk: not in enabled drivers build config 00:01:15.440 net/cpfl: not in enabled drivers build config 00:01:15.440 net/cxgbe: not in enabled drivers build config 00:01:15.440 net/dpaa: not in enabled drivers build config 00:01:15.440 net/dpaa2: not in enabled drivers build config 00:01:15.440 net/e1000: not in enabled drivers build config 00:01:15.440 net/ena: not in enabled drivers build config 00:01:15.440 net/enetc: not in enabled drivers build config 00:01:15.440 net/enetfec: not in enabled drivers build config 00:01:15.440 net/enic: not in enabled drivers build config 00:01:15.440 net/failsafe: not in enabled drivers build config 00:01:15.440 net/fm10k: not in enabled drivers build config 00:01:15.440 net/gve: not in enabled drivers build config 00:01:15.440 net/hinic: not in enabled drivers build config 00:01:15.440 net/hns3: not in enabled drivers build config 00:01:15.440 net/i40e: not in enabled drivers build config 00:01:15.440 net/iavf: not in enabled drivers build config 00:01:15.440 net/ice: not in enabled drivers build config 00:01:15.440 net/idpf: not in enabled drivers build config 00:01:15.440 net/igc: not in enabled drivers build config 00:01:15.440 net/ionic: not in enabled drivers build config 00:01:15.440 net/ipn3ke: not in enabled drivers build config 00:01:15.440 net/ixgbe: not in enabled drivers build config 00:01:15.440 net/mana: not in enabled drivers build config 00:01:15.440 net/memif: not in enabled drivers build config 00:01:15.440 net/mlx4: not in enabled drivers build config 00:01:15.440 net/mlx5: not in enabled drivers build config 00:01:15.440 net/mvneta: not in enabled drivers build config 00:01:15.440 net/mvpp2: not in enabled drivers build config 00:01:15.440 net/netvsc: not in enabled drivers build config 00:01:15.440 net/nfb: not in enabled drivers build config 00:01:15.440 net/nfp: not in enabled drivers build config 00:01:15.440 net/ngbe: not in enabled drivers build config 00:01:15.440 net/null: not in enabled drivers build config 00:01:15.440 net/octeontx: not in enabled drivers build config 00:01:15.440 net/octeon_ep: not in enabled drivers build config 00:01:15.440 net/pcap: not in enabled drivers build config 00:01:15.440 net/pfe: not in enabled drivers build config 00:01:15.440 net/qede: not in enabled drivers build config 00:01:15.440 net/ring: not in enabled drivers build config 00:01:15.440 net/sfc: not in enabled drivers build config 00:01:15.440 net/softnic: 
not in enabled drivers build config 00:01:15.440 net/tap: not in enabled drivers build config 00:01:15.440 net/thunderx: not in enabled drivers build config 00:01:15.440 net/txgbe: not in enabled drivers build config 00:01:15.440 net/vdev_netvsc: not in enabled drivers build config 00:01:15.440 net/vhost: not in enabled drivers build config 00:01:15.440 net/virtio: not in enabled drivers build config 00:01:15.440 net/vmxnet3: not in enabled drivers build config 00:01:15.440 raw/*: missing internal dependency, "rawdev" 00:01:15.440 crypto/armv8: not in enabled drivers build config 00:01:15.440 crypto/bcmfs: not in enabled drivers build config 00:01:15.441 crypto/caam_jr: not in enabled drivers build config 00:01:15.441 crypto/ccp: not in enabled drivers build config 00:01:15.441 crypto/cnxk: not in enabled drivers build config 00:01:15.441 crypto/dpaa_sec: not in enabled drivers build config 00:01:15.441 crypto/dpaa2_sec: not in enabled drivers build config 00:01:15.441 crypto/ipsec_mb: not in enabled drivers build config 00:01:15.441 crypto/mlx5: not in enabled drivers build config 00:01:15.441 crypto/mvsam: not in enabled drivers build config 00:01:15.441 crypto/nitrox: not in enabled drivers build config 00:01:15.441 crypto/null: not in enabled drivers build config 00:01:15.441 crypto/octeontx: not in enabled drivers build config 00:01:15.441 crypto/openssl: not in enabled drivers build config 00:01:15.441 crypto/scheduler: not in enabled drivers build config 00:01:15.441 crypto/uadk: not in enabled drivers build config 00:01:15.441 crypto/virtio: not in enabled drivers build config 00:01:15.441 compress/isal: not in enabled drivers build config 00:01:15.441 compress/mlx5: not in enabled drivers build config 00:01:15.441 compress/octeontx: not in enabled drivers build config 00:01:15.441 compress/zlib: not in enabled drivers build config 00:01:15.441 regex/*: missing internal dependency, "regexdev" 00:01:15.441 ml/*: missing internal dependency, "mldev" 00:01:15.441 vdpa/ifc: not in enabled drivers build config 00:01:15.441 vdpa/mlx5: not in enabled drivers build config 00:01:15.441 vdpa/nfp: not in enabled drivers build config 00:01:15.441 vdpa/sfc: not in enabled drivers build config 00:01:15.441 event/*: missing internal dependency, "eventdev" 00:01:15.441 baseband/*: missing internal dependency, "bbdev" 00:01:15.441 gpu/*: missing internal dependency, "gpudev" 00:01:15.441 00:01:15.441 00:01:15.441 Build targets in project: 85 00:01:15.441 00:01:15.441 DPDK 23.11.0 00:01:15.441 00:01:15.441 User defined options 00:01:15.441 buildtype : debug 00:01:15.441 default_library : shared 00:01:15.441 libdir : lib 00:01:15.441 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:15.441 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:15.441 c_link_args : 00:01:15.441 cpu_instruction_set: native 00:01:15.441 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:15.441 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:01:15.441 enable_docs : false 00:01:15.441 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 
00:01:15.441 enable_kmods : false 00:01:15.441 tests : false 00:01:15.441 00:01:15.441 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:15.710 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:15.710 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:15.710 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:15.710 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:15.710 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:15.710 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:15.710 [6/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:15.710 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:15.710 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:15.710 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:15.710 [10/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:15.710 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:15.710 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:15.710 [13/265] Linking static target lib/librte_kvargs.a 00:01:15.710 [14/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:15.710 [15/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:15.710 [16/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:15.710 [17/265] Linking static target lib/librte_log.a 00:01:15.710 [18/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:15.969 [19/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:15.969 [20/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:15.969 [21/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:16.545 [22/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.545 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:16.545 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:16.545 [25/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:16.545 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:16.546 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:16.546 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:16.546 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:16.546 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:16.546 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:16.546 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:16.546 [33/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:16.546 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:16.546 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:16.546 [36/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:16.546 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:16.546 
[38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:16.546 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:16.546 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:16.546 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:16.546 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:16.546 [43/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:16.546 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:16.811 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:16.811 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:16.811 [47/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:16.811 [48/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:16.811 [49/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:16.811 [50/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:16.811 [51/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:16.811 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:16.811 [53/265] Linking static target lib/librte_telemetry.a 00:01:16.811 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:16.811 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:16.811 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:16.811 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:16.811 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:16.811 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:16.811 [60/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:16.811 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:16.811 [62/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:16.811 [63/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:16.811 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:16.811 [65/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:17.070 [66/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:17.070 [67/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:17.070 [68/265] Linking static target lib/librte_pci.a 00:01:17.070 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:17.070 [70/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:17.070 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:17.070 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:17.070 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:17.070 [74/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:17.070 [75/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:17.070 [76/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.070 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:17.070 [78/265] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:17.070 [79/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:17.070 [80/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:17.070 [81/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:17.070 [82/265] Linking target lib/librte_log.so.24.0 00:01:17.332 [83/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:17.332 [84/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:17.332 [85/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:17.332 [86/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:17.332 [87/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:17.332 [88/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:17.332 [89/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:17.592 [90/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.592 [91/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:17.592 [92/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:17.592 [93/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:17.592 [94/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:17.592 [95/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:17.592 [96/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:17.592 [97/265] Linking target lib/librte_kvargs.so.24.0 00:01:17.592 [98/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:17.592 [99/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:17.592 [100/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:17.592 [101/265] Linking static target lib/librte_eal.a 00:01:17.592 [102/265] Linking static target lib/librte_meter.a 00:01:17.592 [103/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:17.592 [104/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:17.592 [105/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:17.592 [106/265] Linking static target lib/librte_ring.a 00:01:17.592 [107/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:17.862 [108/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:17.862 [109/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:17.862 [110/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.862 [111/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:17.862 [112/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:17.862 [113/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:17.862 [114/265] Linking target lib/librte_telemetry.so.24.0 00:01:17.862 [115/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:17.862 [116/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:17.862 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:17.862 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:17.862 [119/265] Linking static target lib/librte_rcu.a 
00:01:17.862 [120/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:17.862 [121/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:17.863 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:17.863 [123/265] Linking static target lib/librte_mempool.a 00:01:17.863 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:18.128 [125/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:18.128 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:18.128 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:18.128 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:18.128 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:18.128 [130/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:18.128 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:18.128 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:18.128 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:18.128 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:18.128 [135/265] Linking static target lib/librte_cmdline.a 00:01:18.128 [136/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:18.128 [137/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:18.128 [138/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.128 [139/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:18.390 [140/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:18.390 [141/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:18.390 [142/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.390 [143/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:18.390 [144/265] Linking static target lib/librte_net.a 00:01:18.390 [145/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:18.390 [146/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:18.390 [147/265] Linking static target lib/librte_timer.a 00:01:18.390 [148/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:18.390 [149/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:18.390 [150/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.648 [151/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:18.649 [152/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:18.649 [153/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:18.649 [154/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:18.649 [155/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:18.649 [156/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.649 [157/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:18.649 [158/265] Linking static target lib/librte_dmadev.a 00:01:18.649 [159/265] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:18.649 [160/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:18.907 [161/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:18.907 [162/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:18.907 [163/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:18.907 [164/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.907 [165/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:18.907 [166/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:18.907 [167/265] Linking static target lib/librte_compressdev.a 00:01:18.907 [168/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.907 [169/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:18.907 [170/265] Linking static target lib/librte_hash.a 00:01:18.907 [171/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:18.907 [172/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:19.166 [173/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:19.166 [174/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:19.166 [175/265] Linking static target lib/librte_power.a 00:01:19.166 [176/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:19.166 [177/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:19.166 [178/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:19.166 [179/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:19.166 [180/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:19.166 [181/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:19.166 [182/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:19.166 [183/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:19.166 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:19.166 [185/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.166 [186/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.166 [187/265] Linking static target lib/librte_reorder.a 00:01:19.166 [188/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:19.166 [189/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:19.166 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:19.166 [191/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:19.424 [192/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:19.424 [193/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:19.424 [194/265] Linking static target lib/librte_mbuf.a 00:01:19.424 [195/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.424 [196/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:19.424 [197/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:19.424 [198/265] Linking static target drivers/librte_bus_vdev.a 
00:01:19.424 [199/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:19.424 [200/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:19.424 [201/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:19.424 [202/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:19.424 [203/265] Linking static target lib/librte_security.a 00:01:19.424 [204/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.424 [205/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.424 [206/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:19.424 [207/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:19.424 [208/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:19.424 [209/265] Linking static target drivers/librte_bus_pci.a 00:01:19.682 [210/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:19.682 [211/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.682 [212/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:19.682 [213/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:19.682 [214/265] Linking static target drivers/librte_mempool_ring.a 00:01:19.682 [215/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.682 [216/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:19.682 [217/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.682 [218/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.940 [219/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:19.940 [220/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:19.940 [221/265] Linking static target lib/librte_cryptodev.a 00:01:19.940 [222/265] Linking static target lib/librte_ethdev.a 00:01:19.940 [223/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.873 [224/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.244 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:24.147 [226/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.147 [227/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.147 [228/265] Linking target lib/librte_eal.so.24.0 00:01:24.147 [229/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:24.148 [230/265] Linking target lib/librte_ring.so.24.0 00:01:24.148 [231/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:24.148 [232/265] Linking target lib/librte_dmadev.so.24.0 00:01:24.148 [233/265] Linking target lib/librte_pci.so.24.0 00:01:24.148 [234/265] Linking target lib/librte_timer.so.24.0 00:01:24.148 [235/265] Linking target lib/librte_meter.so.24.0 00:01:24.441 [236/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:24.441 [237/265] Generating symbol file 
lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:24.441 [238/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:24.441 [239/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:24.441 [240/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:24.441 [241/265] Linking target lib/librte_rcu.so.24.0 00:01:24.441 [242/265] Linking target lib/librte_mempool.so.24.0 00:01:24.441 [243/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:24.441 [244/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:24.441 [245/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:24.441 [246/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:24.441 [247/265] Linking target lib/librte_mbuf.so.24.0 00:01:24.699 [248/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:24.699 [249/265] Linking target lib/librte_reorder.so.24.0 00:01:24.699 [250/265] Linking target lib/librte_compressdev.so.24.0 00:01:24.699 [251/265] Linking target lib/librte_net.so.24.0 00:01:24.699 [252/265] Linking target lib/librte_cryptodev.so.24.0 00:01:24.957 [253/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:24.957 [254/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:24.957 [255/265] Linking target lib/librte_hash.so.24.0 00:01:24.957 [256/265] Linking target lib/librte_cmdline.so.24.0 00:01:24.957 [257/265] Linking target lib/librte_security.so.24.0 00:01:24.957 [258/265] Linking target lib/librte_ethdev.so.24.0 00:01:24.957 [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:24.957 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:24.957 [261/265] Linking target lib/librte_power.so.24.0 00:01:27.481 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:27.481 [263/265] Linking static target lib/librte_vhost.a 00:01:28.414 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.414 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:28.414 INFO: autodetecting backend as ninja 00:01:28.414 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:29.347 CC lib/ut/ut.o 00:01:29.347 CC lib/ut_mock/mock.o 00:01:29.347 CC lib/log/log.o 00:01:29.347 CC lib/log/log_flags.o 00:01:29.347 CC lib/log/log_deprecated.o 00:01:29.604 LIB libspdk_ut_mock.a 00:01:29.604 SO libspdk_ut_mock.so.6.0 00:01:29.604 LIB libspdk_ut.a 00:01:29.604 LIB libspdk_log.a 00:01:29.604 SO libspdk_ut.so.2.0 00:01:29.604 SO libspdk_log.so.7.0 00:01:29.604 SYMLINK libspdk_ut_mock.so 00:01:29.604 SYMLINK libspdk_ut.so 00:01:29.604 SYMLINK libspdk_log.so 00:01:29.862 CC lib/dma/dma.o 00:01:29.862 CXX lib/trace_parser/trace.o 00:01:29.862 CC lib/ioat/ioat.o 00:01:29.862 CC lib/util/base64.o 00:01:29.862 CC lib/util/bit_array.o 00:01:29.862 CC lib/util/cpuset.o 00:01:29.862 CC lib/util/crc16.o 00:01:29.862 CC lib/util/crc32.o 00:01:29.862 CC lib/util/crc32c.o 00:01:29.862 CC lib/util/crc32_ieee.o 00:01:29.862 CC lib/util/crc64.o 00:01:29.862 CC lib/util/dif.o 00:01:29.862 CC lib/util/fd.o 00:01:29.862 CC lib/util/file.o 00:01:29.862 CC lib/util/hexlify.o 00:01:29.862 CC lib/util/iov.o 00:01:29.862 CC 
lib/util/math.o 00:01:29.862 CC lib/util/pipe.o 00:01:29.862 CC lib/util/strerror_tls.o 00:01:29.862 CC lib/util/string.o 00:01:29.862 CC lib/util/uuid.o 00:01:29.862 CC lib/util/fd_group.o 00:01:29.862 CC lib/util/xor.o 00:01:29.862 CC lib/util/zipf.o 00:01:29.862 CC lib/vfio_user/host/vfio_user_pci.o 00:01:29.862 CC lib/vfio_user/host/vfio_user.o 00:01:30.119 LIB libspdk_dma.a 00:01:30.119 SO libspdk_dma.so.4.0 00:01:30.119 SYMLINK libspdk_dma.so 00:01:30.119 LIB libspdk_ioat.a 00:01:30.119 SO libspdk_ioat.so.7.0 00:01:30.119 SYMLINK libspdk_ioat.so 00:01:30.119 LIB libspdk_vfio_user.a 00:01:30.119 SO libspdk_vfio_user.so.5.0 00:01:30.376 SYMLINK libspdk_vfio_user.so 00:01:30.376 LIB libspdk_util.a 00:01:30.377 SO libspdk_util.so.9.0 00:01:30.634 SYMLINK libspdk_util.so 00:01:30.634 CC lib/conf/conf.o 00:01:30.634 CC lib/rdma/common.o 00:01:30.634 CC lib/json/json_parse.o 00:01:30.634 CC lib/json/json_util.o 00:01:30.634 CC lib/rdma/rdma_verbs.o 00:01:30.634 CC lib/idxd/idxd.o 00:01:30.634 CC lib/json/json_write.o 00:01:30.634 CC lib/idxd/idxd_user.o 00:01:30.634 CC lib/vmd/vmd.o 00:01:30.634 CC lib/env_dpdk/env.o 00:01:30.634 CC lib/vmd/led.o 00:01:30.634 CC lib/env_dpdk/memory.o 00:01:30.634 CC lib/env_dpdk/pci.o 00:01:30.634 CC lib/env_dpdk/init.o 00:01:30.634 CC lib/env_dpdk/threads.o 00:01:30.634 CC lib/env_dpdk/pci_ioat.o 00:01:30.634 CC lib/env_dpdk/pci_virtio.o 00:01:30.634 CC lib/env_dpdk/pci_vmd.o 00:01:30.634 CC lib/env_dpdk/pci_idxd.o 00:01:30.634 CC lib/env_dpdk/pci_event.o 00:01:30.634 CC lib/env_dpdk/sigbus_handler.o 00:01:30.634 CC lib/env_dpdk/pci_dpdk.o 00:01:30.634 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:30.634 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:30.891 LIB libspdk_trace_parser.a 00:01:30.891 SO libspdk_trace_parser.so.5.0 00:01:30.891 SYMLINK libspdk_trace_parser.so 00:01:30.891 LIB libspdk_conf.a 00:01:30.891 SO libspdk_conf.so.6.0 00:01:31.150 LIB libspdk_rdma.a 00:01:31.150 SYMLINK libspdk_conf.so 00:01:31.150 SO libspdk_rdma.so.6.0 00:01:31.150 LIB libspdk_json.a 00:01:31.150 SO libspdk_json.so.6.0 00:01:31.151 SYMLINK libspdk_rdma.so 00:01:31.151 SYMLINK libspdk_json.so 00:01:31.151 LIB libspdk_idxd.a 00:01:31.409 SO libspdk_idxd.so.12.0 00:01:31.409 CC lib/jsonrpc/jsonrpc_server.o 00:01:31.409 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:31.409 CC lib/jsonrpc/jsonrpc_client.o 00:01:31.409 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:31.409 SYMLINK libspdk_idxd.so 00:01:31.409 LIB libspdk_vmd.a 00:01:31.409 SO libspdk_vmd.so.6.0 00:01:31.409 SYMLINK libspdk_vmd.so 00:01:31.667 LIB libspdk_jsonrpc.a 00:01:31.667 SO libspdk_jsonrpc.so.6.0 00:01:31.667 SYMLINK libspdk_jsonrpc.so 00:01:31.924 CC lib/rpc/rpc.o 00:01:32.182 LIB libspdk_rpc.a 00:01:32.182 SO libspdk_rpc.so.6.0 00:01:32.182 SYMLINK libspdk_rpc.so 00:01:32.440 CC lib/keyring/keyring.o 00:01:32.440 CC lib/keyring/keyring_rpc.o 00:01:32.440 CC lib/trace/trace.o 00:01:32.440 CC lib/notify/notify.o 00:01:32.440 CC lib/trace/trace_flags.o 00:01:32.440 CC lib/notify/notify_rpc.o 00:01:32.440 CC lib/trace/trace_rpc.o 00:01:32.440 LIB libspdk_notify.a 00:01:32.440 SO libspdk_notify.so.6.0 00:01:32.440 SYMLINK libspdk_notify.so 00:01:32.440 LIB libspdk_keyring.a 00:01:32.698 LIB libspdk_trace.a 00:01:32.698 SO libspdk_keyring.so.1.0 00:01:32.698 SO libspdk_trace.so.10.0 00:01:32.698 SYMLINK libspdk_keyring.so 00:01:32.698 SYMLINK libspdk_trace.so 00:01:32.698 LIB libspdk_env_dpdk.a 00:01:32.698 SO libspdk_env_dpdk.so.14.0 00:01:32.956 CC lib/sock/sock.o 00:01:32.956 CC lib/sock/sock_rpc.o 00:01:32.956 CC 
lib/thread/thread.o 00:01:32.956 CC lib/thread/iobuf.o 00:01:32.956 SYMLINK libspdk_env_dpdk.so 00:01:33.214 LIB libspdk_sock.a 00:01:33.214 SO libspdk_sock.so.9.0 00:01:33.214 SYMLINK libspdk_sock.so 00:01:33.472 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:33.472 CC lib/nvme/nvme_ctrlr.o 00:01:33.472 CC lib/nvme/nvme_fabric.o 00:01:33.472 CC lib/nvme/nvme_ns_cmd.o 00:01:33.472 CC lib/nvme/nvme_ns.o 00:01:33.472 CC lib/nvme/nvme_pcie_common.o 00:01:33.472 CC lib/nvme/nvme_pcie.o 00:01:33.472 CC lib/nvme/nvme_qpair.o 00:01:33.472 CC lib/nvme/nvme.o 00:01:33.472 CC lib/nvme/nvme_quirks.o 00:01:33.472 CC lib/nvme/nvme_transport.o 00:01:33.472 CC lib/nvme/nvme_discovery.o 00:01:33.472 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:33.472 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:33.472 CC lib/nvme/nvme_tcp.o 00:01:33.472 CC lib/nvme/nvme_opal.o 00:01:33.472 CC lib/nvme/nvme_io_msg.o 00:01:33.472 CC lib/nvme/nvme_poll_group.o 00:01:33.472 CC lib/nvme/nvme_zns.o 00:01:33.472 CC lib/nvme/nvme_stubs.o 00:01:33.472 CC lib/nvme/nvme_auth.o 00:01:33.472 CC lib/nvme/nvme_cuse.o 00:01:33.472 CC lib/nvme/nvme_vfio_user.o 00:01:33.472 CC lib/nvme/nvme_rdma.o 00:01:34.403 LIB libspdk_thread.a 00:01:34.403 SO libspdk_thread.so.10.0 00:01:34.403 SYMLINK libspdk_thread.so 00:01:34.661 CC lib/init/json_config.o 00:01:34.661 CC lib/blob/blobstore.o 00:01:34.661 CC lib/init/subsystem.o 00:01:34.661 CC lib/blob/request.o 00:01:34.661 CC lib/accel/accel.o 00:01:34.661 CC lib/init/subsystem_rpc.o 00:01:34.661 CC lib/accel/accel_rpc.o 00:01:34.661 CC lib/init/rpc.o 00:01:34.661 CC lib/blob/zeroes.o 00:01:34.661 CC lib/accel/accel_sw.o 00:01:34.661 CC lib/virtio/virtio.o 00:01:34.661 CC lib/blob/blob_bs_dev.o 00:01:34.661 CC lib/virtio/virtio_vhost_user.o 00:01:34.661 CC lib/virtio/virtio_vfio_user.o 00:01:34.661 CC lib/vfu_tgt/tgt_endpoint.o 00:01:34.661 CC lib/vfu_tgt/tgt_rpc.o 00:01:34.661 CC lib/virtio/virtio_pci.o 00:01:34.920 LIB libspdk_init.a 00:01:34.920 SO libspdk_init.so.5.0 00:01:34.920 LIB libspdk_virtio.a 00:01:34.920 LIB libspdk_vfu_tgt.a 00:01:34.920 SYMLINK libspdk_init.so 00:01:34.920 SO libspdk_vfu_tgt.so.3.0 00:01:34.920 SO libspdk_virtio.so.7.0 00:01:35.180 SYMLINK libspdk_vfu_tgt.so 00:01:35.180 SYMLINK libspdk_virtio.so 00:01:35.180 CC lib/event/app.o 00:01:35.180 CC lib/event/reactor.o 00:01:35.180 CC lib/event/log_rpc.o 00:01:35.180 CC lib/event/app_rpc.o 00:01:35.180 CC lib/event/scheduler_static.o 00:01:35.745 LIB libspdk_event.a 00:01:35.745 SO libspdk_event.so.13.0 00:01:35.745 LIB libspdk_accel.a 00:01:35.745 SYMLINK libspdk_event.so 00:01:35.745 SO libspdk_accel.so.15.0 00:01:35.745 SYMLINK libspdk_accel.so 00:01:35.745 LIB libspdk_nvme.a 00:01:36.002 SO libspdk_nvme.so.13.0 00:01:36.002 CC lib/bdev/bdev.o 00:01:36.002 CC lib/bdev/bdev_rpc.o 00:01:36.002 CC lib/bdev/bdev_zone.o 00:01:36.002 CC lib/bdev/part.o 00:01:36.002 CC lib/bdev/scsi_nvme.o 00:01:36.260 SYMLINK libspdk_nvme.so 00:01:37.633 LIB libspdk_blob.a 00:01:37.633 SO libspdk_blob.so.11.0 00:01:37.633 SYMLINK libspdk_blob.so 00:01:37.891 CC lib/lvol/lvol.o 00:01:37.891 CC lib/blobfs/blobfs.o 00:01:37.891 CC lib/blobfs/tree.o 00:01:38.461 LIB libspdk_bdev.a 00:01:38.461 SO libspdk_bdev.so.15.0 00:01:38.778 SYMLINK libspdk_bdev.so 00:01:38.778 LIB libspdk_blobfs.a 00:01:38.778 SO libspdk_blobfs.so.10.0 00:01:38.778 LIB libspdk_lvol.a 00:01:38.778 SYMLINK libspdk_blobfs.so 00:01:38.778 CC lib/ublk/ublk.o 00:01:38.778 CC lib/nbd/nbd.o 00:01:38.778 CC lib/scsi/dev.o 00:01:38.778 CC lib/ftl/ftl_core.o 00:01:38.778 CC lib/ublk/ublk_rpc.o 00:01:38.778 
CC lib/nvmf/ctrlr.o 00:01:38.778 CC lib/nbd/nbd_rpc.o 00:01:38.778 CC lib/scsi/lun.o 00:01:38.778 CC lib/ftl/ftl_init.o 00:01:38.778 CC lib/nvmf/ctrlr_discovery.o 00:01:38.778 CC lib/scsi/port.o 00:01:38.778 CC lib/ftl/ftl_layout.o 00:01:38.778 CC lib/nvmf/ctrlr_bdev.o 00:01:38.778 CC lib/scsi/scsi.o 00:01:38.778 CC lib/ftl/ftl_debug.o 00:01:38.778 CC lib/nvmf/subsystem.o 00:01:38.778 CC lib/ftl/ftl_io.o 00:01:38.778 CC lib/scsi/scsi_bdev.o 00:01:38.778 CC lib/ftl/ftl_sb.o 00:01:38.778 CC lib/scsi/scsi_rpc.o 00:01:38.778 CC lib/scsi/scsi_pr.o 00:01:38.778 CC lib/ftl/ftl_l2p.o 00:01:38.778 CC lib/nvmf/nvmf.o 00:01:38.778 CC lib/scsi/task.o 00:01:38.778 CC lib/ftl/ftl_l2p_flat.o 00:01:38.778 CC lib/nvmf/nvmf_rpc.o 00:01:38.778 CC lib/nvmf/transport.o 00:01:38.778 CC lib/ftl/ftl_nv_cache.o 00:01:38.778 CC lib/nvmf/tcp.o 00:01:38.778 CC lib/ftl/ftl_band.o 00:01:38.778 CC lib/nvmf/stubs.o 00:01:38.778 CC lib/ftl/ftl_band_ops.o 00:01:38.778 CC lib/nvmf/vfio_user.o 00:01:38.778 CC lib/ftl/ftl_writer.o 00:01:38.778 CC lib/nvmf/rdma.o 00:01:38.778 CC lib/ftl/ftl_rq.o 00:01:38.778 CC lib/nvmf/auth.o 00:01:38.778 CC lib/ftl/ftl_reloc.o 00:01:38.778 CC lib/ftl/ftl_l2p_cache.o 00:01:38.778 CC lib/ftl/ftl_p2l.o 00:01:38.778 CC lib/ftl/mngt/ftl_mngt.o 00:01:38.778 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:38.778 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:38.778 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:38.778 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:38.778 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:38.778 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:38.778 SO libspdk_lvol.so.10.0 00:01:39.055 SYMLINK libspdk_lvol.so 00:01:39.055 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:39.055 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:39.055 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:39.055 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:39.316 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:39.316 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:39.316 CC lib/ftl/utils/ftl_conf.o 00:01:39.316 CC lib/ftl/utils/ftl_md.o 00:01:39.316 CC lib/ftl/utils/ftl_mempool.o 00:01:39.316 CC lib/ftl/utils/ftl_bitmap.o 00:01:39.316 CC lib/ftl/utils/ftl_property.o 00:01:39.316 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:39.316 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:39.316 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:39.316 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:39.316 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:39.316 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:39.316 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:39.316 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:39.316 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:39.316 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:39.316 CC lib/ftl/base/ftl_base_dev.o 00:01:39.316 CC lib/ftl/base/ftl_base_bdev.o 00:01:39.316 CC lib/ftl/ftl_trace.o 00:01:39.574 LIB libspdk_nbd.a 00:01:39.574 SO libspdk_nbd.so.7.0 00:01:39.574 SYMLINK libspdk_nbd.so 00:01:39.832 LIB libspdk_scsi.a 00:01:39.832 SO libspdk_scsi.so.9.0 00:01:39.832 LIB libspdk_ublk.a 00:01:39.832 SYMLINK libspdk_scsi.so 00:01:39.832 SO libspdk_ublk.so.3.0 00:01:39.832 SYMLINK libspdk_ublk.so 00:01:40.090 CC lib/vhost/vhost.o 00:01:40.090 CC lib/iscsi/conn.o 00:01:40.090 CC lib/vhost/vhost_rpc.o 00:01:40.090 CC lib/iscsi/init_grp.o 00:01:40.090 CC lib/iscsi/iscsi.o 00:01:40.090 CC lib/vhost/vhost_scsi.o 00:01:40.090 CC lib/vhost/vhost_blk.o 00:01:40.090 CC lib/iscsi/md5.o 00:01:40.090 CC lib/vhost/rte_vhost_user.o 00:01:40.090 CC lib/iscsi/param.o 00:01:40.090 CC lib/iscsi/portal_grp.o 00:01:40.090 CC lib/iscsi/tgt_node.o 00:01:40.090 CC lib/iscsi/iscsi_subsystem.o 00:01:40.090 CC lib/iscsi/iscsi_rpc.o 
00:01:40.090 CC lib/iscsi/task.o 00:01:40.090 LIB libspdk_ftl.a 00:01:40.348 SO libspdk_ftl.so.9.0 00:01:40.605 SYMLINK libspdk_ftl.so 00:01:41.170 LIB libspdk_vhost.a 00:01:41.170 SO libspdk_vhost.so.8.0 00:01:41.427 LIB libspdk_nvmf.a 00:01:41.427 SYMLINK libspdk_vhost.so 00:01:41.427 SO libspdk_nvmf.so.18.0 00:01:41.427 LIB libspdk_iscsi.a 00:01:41.427 SO libspdk_iscsi.so.8.0 00:01:41.684 SYMLINK libspdk_nvmf.so 00:01:41.684 SYMLINK libspdk_iscsi.so 00:01:41.942 CC module/vfu_device/vfu_virtio.o 00:01:41.942 CC module/env_dpdk/env_dpdk_rpc.o 00:01:41.942 CC module/vfu_device/vfu_virtio_blk.o 00:01:41.942 CC module/vfu_device/vfu_virtio_scsi.o 00:01:41.942 CC module/vfu_device/vfu_virtio_rpc.o 00:01:41.942 CC module/blob/bdev/blob_bdev.o 00:01:41.942 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:41.942 CC module/accel/ioat/accel_ioat.o 00:01:41.942 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:41.942 CC module/sock/posix/posix.o 00:01:41.942 CC module/accel/error/accel_error.o 00:01:41.942 CC module/accel/iaa/accel_iaa.o 00:01:41.942 CC module/accel/ioat/accel_ioat_rpc.o 00:01:41.942 CC module/accel/error/accel_error_rpc.o 00:01:41.942 CC module/scheduler/gscheduler/gscheduler.o 00:01:41.942 CC module/accel/iaa/accel_iaa_rpc.o 00:01:41.942 CC module/accel/dsa/accel_dsa.o 00:01:41.942 CC module/keyring/file/keyring.o 00:01:41.942 CC module/accel/dsa/accel_dsa_rpc.o 00:01:41.942 CC module/keyring/file/keyring_rpc.o 00:01:42.200 LIB libspdk_env_dpdk_rpc.a 00:01:42.200 SO libspdk_env_dpdk_rpc.so.6.0 00:01:42.200 SYMLINK libspdk_env_dpdk_rpc.so 00:01:42.200 LIB libspdk_keyring_file.a 00:01:42.200 LIB libspdk_scheduler_gscheduler.a 00:01:42.200 LIB libspdk_scheduler_dpdk_governor.a 00:01:42.200 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:42.200 SO libspdk_scheduler_gscheduler.so.4.0 00:01:42.200 SO libspdk_keyring_file.so.1.0 00:01:42.200 LIB libspdk_accel_error.a 00:01:42.200 LIB libspdk_accel_ioat.a 00:01:42.200 LIB libspdk_scheduler_dynamic.a 00:01:42.200 LIB libspdk_accel_iaa.a 00:01:42.200 SO libspdk_accel_error.so.2.0 00:01:42.200 SO libspdk_accel_ioat.so.6.0 00:01:42.200 SO libspdk_scheduler_dynamic.so.4.0 00:01:42.200 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:42.200 SYMLINK libspdk_scheduler_gscheduler.so 00:01:42.200 SYMLINK libspdk_keyring_file.so 00:01:42.200 SO libspdk_accel_iaa.so.3.0 00:01:42.200 LIB libspdk_accel_dsa.a 00:01:42.200 SYMLINK libspdk_accel_error.so 00:01:42.200 SO libspdk_accel_dsa.so.5.0 00:01:42.200 SYMLINK libspdk_scheduler_dynamic.so 00:01:42.200 LIB libspdk_blob_bdev.a 00:01:42.200 SYMLINK libspdk_accel_ioat.so 00:01:42.200 SYMLINK libspdk_accel_iaa.so 00:01:42.457 SO libspdk_blob_bdev.so.11.0 00:01:42.457 SYMLINK libspdk_accel_dsa.so 00:01:42.457 SYMLINK libspdk_blob_bdev.so 00:01:42.457 LIB libspdk_vfu_device.a 00:01:42.715 SO libspdk_vfu_device.so.3.0 00:01:42.715 CC module/bdev/gpt/gpt.o 00:01:42.715 CC module/bdev/lvol/vbdev_lvol.o 00:01:42.715 CC module/bdev/malloc/bdev_malloc.o 00:01:42.715 CC module/bdev/passthru/vbdev_passthru.o 00:01:42.715 CC module/bdev/error/vbdev_error.o 00:01:42.715 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:42.715 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:42.715 CC module/bdev/gpt/vbdev_gpt.o 00:01:42.715 CC module/bdev/error/vbdev_error_rpc.o 00:01:42.715 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:42.715 CC module/bdev/nvme/bdev_nvme.o 00:01:42.715 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:42.715 CC module/bdev/split/vbdev_split.o 00:01:42.715 CC 
module/bdev/virtio/bdev_virtio_blk.o 00:01:42.715 CC module/bdev/null/bdev_null.o 00:01:42.715 CC module/bdev/iscsi/bdev_iscsi.o 00:01:42.715 CC module/bdev/delay/vbdev_delay.o 00:01:42.715 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:42.715 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:42.715 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:42.715 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:42.715 CC module/bdev/raid/bdev_raid.o 00:01:42.715 CC module/bdev/nvme/nvme_rpc.o 00:01:42.715 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:42.715 CC module/bdev/null/bdev_null_rpc.o 00:01:42.715 CC module/bdev/split/vbdev_split_rpc.o 00:01:42.715 CC module/bdev/raid/bdev_raid_rpc.o 00:01:42.715 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:42.715 CC module/bdev/aio/bdev_aio.o 00:01:42.715 CC module/bdev/raid/bdev_raid_sb.o 00:01:42.715 CC module/blobfs/bdev/blobfs_bdev.o 00:01:42.715 CC module/bdev/nvme/bdev_mdns_client.o 00:01:42.715 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:42.715 CC module/bdev/ftl/bdev_ftl.o 00:01:42.715 CC module/bdev/raid/raid0.o 00:01:42.715 CC module/bdev/nvme/vbdev_opal.o 00:01:42.715 CC module/bdev/aio/bdev_aio_rpc.o 00:01:42.715 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:42.715 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:42.715 CC module/bdev/raid/raid1.o 00:01:42.715 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:42.715 CC module/bdev/raid/concat.o 00:01:42.715 SYMLINK libspdk_vfu_device.so 00:01:42.973 LIB libspdk_sock_posix.a 00:01:42.973 SO libspdk_sock_posix.so.6.0 00:01:42.973 LIB libspdk_blobfs_bdev.a 00:01:42.973 LIB libspdk_bdev_passthru.a 00:01:42.973 SO libspdk_blobfs_bdev.so.6.0 00:01:42.973 SYMLINK libspdk_sock_posix.so 00:01:42.973 LIB libspdk_bdev_split.a 00:01:42.973 SO libspdk_bdev_passthru.so.6.0 00:01:42.973 LIB libspdk_bdev_null.a 00:01:42.973 SO libspdk_bdev_split.so.6.0 00:01:42.973 SYMLINK libspdk_blobfs_bdev.so 00:01:43.232 SO libspdk_bdev_null.so.6.0 00:01:43.232 SYMLINK libspdk_bdev_passthru.so 00:01:43.232 LIB libspdk_bdev_ftl.a 00:01:43.232 LIB libspdk_bdev_gpt.a 00:01:43.232 SYMLINK libspdk_bdev_split.so 00:01:43.232 LIB libspdk_bdev_error.a 00:01:43.232 SO libspdk_bdev_ftl.so.6.0 00:01:43.232 SYMLINK libspdk_bdev_null.so 00:01:43.232 SO libspdk_bdev_gpt.so.6.0 00:01:43.232 LIB libspdk_bdev_zone_block.a 00:01:43.232 SO libspdk_bdev_error.so.6.0 00:01:43.232 LIB libspdk_bdev_aio.a 00:01:43.232 LIB libspdk_bdev_iscsi.a 00:01:43.232 SO libspdk_bdev_zone_block.so.6.0 00:01:43.232 SYMLINK libspdk_bdev_ftl.so 00:01:43.232 SO libspdk_bdev_aio.so.6.0 00:01:43.232 SO libspdk_bdev_iscsi.so.6.0 00:01:43.232 SYMLINK libspdk_bdev_gpt.so 00:01:43.232 LIB libspdk_bdev_delay.a 00:01:43.232 SYMLINK libspdk_bdev_error.so 00:01:43.232 LIB libspdk_bdev_malloc.a 00:01:43.232 SO libspdk_bdev_delay.so.6.0 00:01:43.232 SYMLINK libspdk_bdev_zone_block.so 00:01:43.232 SO libspdk_bdev_malloc.so.6.0 00:01:43.232 SYMLINK libspdk_bdev_aio.so 00:01:43.232 SYMLINK libspdk_bdev_iscsi.so 00:01:43.232 SYMLINK libspdk_bdev_delay.so 00:01:43.232 SYMLINK libspdk_bdev_malloc.so 00:01:43.490 LIB libspdk_bdev_virtio.a 00:01:43.490 LIB libspdk_bdev_lvol.a 00:01:43.490 SO libspdk_bdev_virtio.so.6.0 00:01:43.490 SO libspdk_bdev_lvol.so.6.0 00:01:43.490 SYMLINK libspdk_bdev_virtio.so 00:01:43.490 SYMLINK libspdk_bdev_lvol.so 00:01:43.749 LIB libspdk_bdev_raid.a 00:01:43.749 SO libspdk_bdev_raid.so.6.0 00:01:44.008 SYMLINK libspdk_bdev_raid.so 00:01:44.942 LIB libspdk_bdev_nvme.a 00:01:44.942 SO libspdk_bdev_nvme.so.7.0 00:01:45.200 SYMLINK libspdk_bdev_nvme.so 00:01:45.457 CC 
module/event/subsystems/keyring/keyring.o 00:01:45.457 CC module/event/subsystems/iobuf/iobuf.o 00:01:45.457 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:45.457 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:45.457 CC module/event/subsystems/scheduler/scheduler.o 00:01:45.457 CC module/event/subsystems/sock/sock.o 00:01:45.457 CC module/event/subsystems/vmd/vmd.o 00:01:45.457 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:45.457 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:45.457 LIB libspdk_event_sock.a 00:01:45.457 LIB libspdk_event_keyring.a 00:01:45.457 LIB libspdk_event_vhost_blk.a 00:01:45.457 LIB libspdk_event_scheduler.a 00:01:45.716 LIB libspdk_event_vfu_tgt.a 00:01:45.716 LIB libspdk_event_vmd.a 00:01:45.716 SO libspdk_event_sock.so.5.0 00:01:45.716 SO libspdk_event_keyring.so.1.0 00:01:45.716 SO libspdk_event_vhost_blk.so.3.0 00:01:45.716 LIB libspdk_event_iobuf.a 00:01:45.716 SO libspdk_event_vfu_tgt.so.3.0 00:01:45.716 SO libspdk_event_scheduler.so.4.0 00:01:45.716 SO libspdk_event_vmd.so.6.0 00:01:45.716 SO libspdk_event_iobuf.so.3.0 00:01:45.716 SYMLINK libspdk_event_sock.so 00:01:45.716 SYMLINK libspdk_event_keyring.so 00:01:45.716 SYMLINK libspdk_event_vhost_blk.so 00:01:45.716 SYMLINK libspdk_event_vfu_tgt.so 00:01:45.716 SYMLINK libspdk_event_scheduler.so 00:01:45.716 SYMLINK libspdk_event_vmd.so 00:01:45.716 SYMLINK libspdk_event_iobuf.so 00:01:45.974 CC module/event/subsystems/accel/accel.o 00:01:45.974 LIB libspdk_event_accel.a 00:01:45.974 SO libspdk_event_accel.so.6.0 00:01:45.974 SYMLINK libspdk_event_accel.so 00:01:46.232 CC module/event/subsystems/bdev/bdev.o 00:01:46.489 LIB libspdk_event_bdev.a 00:01:46.489 SO libspdk_event_bdev.so.6.0 00:01:46.489 SYMLINK libspdk_event_bdev.so 00:01:46.747 CC module/event/subsystems/nbd/nbd.o 00:01:46.747 CC module/event/subsystems/ublk/ublk.o 00:01:46.747 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:46.747 CC module/event/subsystems/scsi/scsi.o 00:01:46.747 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:46.747 LIB libspdk_event_nbd.a 00:01:46.747 LIB libspdk_event_ublk.a 00:01:46.747 LIB libspdk_event_scsi.a 00:01:46.747 SO libspdk_event_nbd.so.6.0 00:01:46.747 SO libspdk_event_ublk.so.3.0 00:01:46.747 SO libspdk_event_scsi.so.6.0 00:01:47.004 SYMLINK libspdk_event_nbd.so 00:01:47.004 SYMLINK libspdk_event_ublk.so 00:01:47.004 SYMLINK libspdk_event_scsi.so 00:01:47.004 LIB libspdk_event_nvmf.a 00:01:47.005 SO libspdk_event_nvmf.so.6.0 00:01:47.005 SYMLINK libspdk_event_nvmf.so 00:01:47.005 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:47.005 CC module/event/subsystems/iscsi/iscsi.o 00:01:47.262 LIB libspdk_event_vhost_scsi.a 00:01:47.262 SO libspdk_event_vhost_scsi.so.3.0 00:01:47.262 LIB libspdk_event_iscsi.a 00:01:47.262 SO libspdk_event_iscsi.so.6.0 00:01:47.262 SYMLINK libspdk_event_vhost_scsi.so 00:01:47.262 SYMLINK libspdk_event_iscsi.so 00:01:47.529 SO libspdk.so.6.0 00:01:47.529 SYMLINK libspdk.so 00:01:47.529 CC app/trace_record/trace_record.o 00:01:47.529 CC app/spdk_nvme_perf/perf.o 00:01:47.529 CC app/spdk_lspci/spdk_lspci.o 00:01:47.529 CXX app/trace/trace.o 00:01:47.529 CC app/spdk_nvme_identify/identify.o 00:01:47.529 CC app/spdk_nvme_discover/discovery_aer.o 00:01:47.529 CC app/spdk_top/spdk_top.o 00:01:47.795 CC test/rpc_client/rpc_client_test.o 00:01:47.795 TEST_HEADER include/spdk/accel.h 00:01:47.795 TEST_HEADER include/spdk/accel_module.h 00:01:47.795 TEST_HEADER include/spdk/assert.h 00:01:47.795 TEST_HEADER include/spdk/barrier.h 00:01:47.795 TEST_HEADER 
include/spdk/base64.h 00:01:47.795 TEST_HEADER include/spdk/bdev.h 00:01:47.795 TEST_HEADER include/spdk/bdev_module.h 00:01:47.795 TEST_HEADER include/spdk/bdev_zone.h 00:01:47.795 TEST_HEADER include/spdk/bit_array.h 00:01:47.795 TEST_HEADER include/spdk/bit_pool.h 00:01:47.795 TEST_HEADER include/spdk/blob_bdev.h 00:01:47.795 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:47.795 TEST_HEADER include/spdk/blobfs.h 00:01:47.795 TEST_HEADER include/spdk/blob.h 00:01:47.795 TEST_HEADER include/spdk/conf.h 00:01:47.795 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:47.795 TEST_HEADER include/spdk/config.h 00:01:47.795 TEST_HEADER include/spdk/cpuset.h 00:01:47.795 TEST_HEADER include/spdk/crc16.h 00:01:47.795 CC app/nvmf_tgt/nvmf_main.o 00:01:47.795 CC app/spdk_dd/spdk_dd.o 00:01:47.795 TEST_HEADER include/spdk/crc32.h 00:01:47.795 TEST_HEADER include/spdk/crc64.h 00:01:47.795 CC app/iscsi_tgt/iscsi_tgt.o 00:01:47.795 TEST_HEADER include/spdk/dif.h 00:01:47.795 TEST_HEADER include/spdk/dma.h 00:01:47.795 TEST_HEADER include/spdk/endian.h 00:01:47.795 TEST_HEADER include/spdk/env_dpdk.h 00:01:47.795 TEST_HEADER include/spdk/env.h 00:01:47.795 CC app/vhost/vhost.o 00:01:47.795 TEST_HEADER include/spdk/event.h 00:01:47.795 TEST_HEADER include/spdk/fd_group.h 00:01:47.795 TEST_HEADER include/spdk/fd.h 00:01:47.795 TEST_HEADER include/spdk/file.h 00:01:47.795 TEST_HEADER include/spdk/ftl.h 00:01:47.795 TEST_HEADER include/spdk/gpt_spec.h 00:01:47.795 TEST_HEADER include/spdk/hexlify.h 00:01:47.795 CC app/spdk_tgt/spdk_tgt.o 00:01:47.795 TEST_HEADER include/spdk/histogram_data.h 00:01:47.795 CC test/env/vtophys/vtophys.o 00:01:47.795 TEST_HEADER include/spdk/idxd.h 00:01:47.795 CC test/env/memory/memory_ut.o 00:01:47.795 CC test/event/reactor/reactor.o 00:01:47.795 CC examples/util/zipf/zipf.o 00:01:47.795 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:47.795 TEST_HEADER include/spdk/idxd_spec.h 00:01:47.795 CC examples/sock/hello_world/hello_sock.o 00:01:47.795 CC app/fio/nvme/fio_plugin.o 00:01:47.795 CC examples/accel/perf/accel_perf.o 00:01:47.795 CC examples/ioat/perf/perf.o 00:01:47.795 CC test/event/reactor_perf/reactor_perf.o 00:01:47.795 CC test/env/pci/pci_ut.o 00:01:47.795 TEST_HEADER include/spdk/init.h 00:01:47.795 CC examples/idxd/perf/perf.o 00:01:47.795 CC examples/nvme/reconnect/reconnect.o 00:01:47.795 TEST_HEADER include/spdk/ioat.h 00:01:47.795 CC test/event/event_perf/event_perf.o 00:01:47.795 CC examples/nvme/hello_world/hello_world.o 00:01:47.795 TEST_HEADER include/spdk/ioat_spec.h 00:01:47.795 CC examples/ioat/verify/verify.o 00:01:47.795 TEST_HEADER include/spdk/iscsi_spec.h 00:01:47.795 CC test/nvme/aer/aer.o 00:01:47.795 CC examples/vmd/lsvmd/lsvmd.o 00:01:47.795 TEST_HEADER include/spdk/json.h 00:01:47.795 CC test/thread/poller_perf/poller_perf.o 00:01:47.795 TEST_HEADER include/spdk/jsonrpc.h 00:01:47.795 TEST_HEADER include/spdk/keyring.h 00:01:47.795 TEST_HEADER include/spdk/keyring_module.h 00:01:47.795 TEST_HEADER include/spdk/likely.h 00:01:47.795 TEST_HEADER include/spdk/log.h 00:01:47.795 TEST_HEADER include/spdk/lvol.h 00:01:47.795 TEST_HEADER include/spdk/memory.h 00:01:47.795 CC test/event/app_repeat/app_repeat.o 00:01:47.795 TEST_HEADER include/spdk/mmio.h 00:01:47.795 TEST_HEADER include/spdk/nbd.h 00:01:47.795 TEST_HEADER include/spdk/notify.h 00:01:47.795 CC test/dma/test_dma/test_dma.o 00:01:47.795 CC examples/thread/thread/thread_ex.o 00:01:47.795 CC examples/blob/cli/blobcli.o 00:01:47.795 CC test/bdev/bdevio/bdevio.o 00:01:47.795 CC 
examples/bdev/hello_world/hello_bdev.o 00:01:47.795 TEST_HEADER include/spdk/nvme.h 00:01:47.795 TEST_HEADER include/spdk/nvme_intel.h 00:01:47.795 CC examples/blob/hello_world/hello_blob.o 00:01:47.795 CC test/blobfs/mkfs/mkfs.o 00:01:47.795 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:47.795 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:47.795 TEST_HEADER include/spdk/nvme_spec.h 00:01:47.795 CC examples/bdev/bdevperf/bdevperf.o 00:01:47.795 TEST_HEADER include/spdk/nvme_zns.h 00:01:47.795 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:47.795 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:47.795 CC examples/nvmf/nvmf/nvmf.o 00:01:47.795 TEST_HEADER include/spdk/nvmf.h 00:01:47.795 TEST_HEADER include/spdk/nvmf_spec.h 00:01:47.795 CC test/app/bdev_svc/bdev_svc.o 00:01:47.795 TEST_HEADER include/spdk/nvmf_transport.h 00:01:47.795 TEST_HEADER include/spdk/opal.h 00:01:47.795 TEST_HEADER include/spdk/opal_spec.h 00:01:47.795 TEST_HEADER include/spdk/pci_ids.h 00:01:47.795 CC test/accel/dif/dif.o 00:01:48.057 TEST_HEADER include/spdk/pipe.h 00:01:48.057 TEST_HEADER include/spdk/queue.h 00:01:48.057 TEST_HEADER include/spdk/reduce.h 00:01:48.057 TEST_HEADER include/spdk/rpc.h 00:01:48.057 TEST_HEADER include/spdk/scheduler.h 00:01:48.057 TEST_HEADER include/spdk/scsi.h 00:01:48.057 TEST_HEADER include/spdk/scsi_spec.h 00:01:48.057 TEST_HEADER include/spdk/sock.h 00:01:48.057 TEST_HEADER include/spdk/stdinc.h 00:01:48.057 LINK spdk_lspci 00:01:48.057 TEST_HEADER include/spdk/string.h 00:01:48.057 TEST_HEADER include/spdk/thread.h 00:01:48.057 TEST_HEADER include/spdk/trace.h 00:01:48.057 TEST_HEADER include/spdk/trace_parser.h 00:01:48.057 TEST_HEADER include/spdk/tree.h 00:01:48.057 TEST_HEADER include/spdk/ublk.h 00:01:48.057 TEST_HEADER include/spdk/util.h 00:01:48.057 TEST_HEADER include/spdk/uuid.h 00:01:48.057 CC test/env/mem_callbacks/mem_callbacks.o 00:01:48.057 TEST_HEADER include/spdk/version.h 00:01:48.057 CC test/lvol/esnap/esnap.o 00:01:48.057 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:48.057 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:48.057 TEST_HEADER include/spdk/vhost.h 00:01:48.057 TEST_HEADER include/spdk/vmd.h 00:01:48.058 TEST_HEADER include/spdk/xor.h 00:01:48.058 TEST_HEADER include/spdk/zipf.h 00:01:48.058 CXX test/cpp_headers/accel.o 00:01:48.058 LINK rpc_client_test 00:01:48.058 LINK spdk_nvme_discover 00:01:48.058 LINK reactor 00:01:48.058 LINK interrupt_tgt 00:01:48.058 LINK vtophys 00:01:48.058 LINK lsvmd 00:01:48.058 LINK reactor_perf 00:01:48.058 LINK zipf 00:01:48.058 LINK event_perf 00:01:48.058 LINK nvmf_tgt 00:01:48.058 LINK poller_perf 00:01:48.058 LINK vhost 00:01:48.058 LINK env_dpdk_post_init 00:01:48.058 LINK spdk_trace_record 00:01:48.058 LINK iscsi_tgt 00:01:48.318 LINK app_repeat 00:01:48.318 LINK spdk_tgt 00:01:48.318 LINK ioat_perf 00:01:48.318 LINK verify 00:01:48.318 LINK hello_world 00:01:48.318 LINK bdev_svc 00:01:48.318 LINK hello_sock 00:01:48.318 LINK mkfs 00:01:48.318 LINK hello_blob 00:01:48.318 LINK aer 00:01:48.318 CXX test/cpp_headers/accel_module.o 00:01:48.318 LINK hello_bdev 00:01:48.318 LINK thread 00:01:48.318 CXX test/cpp_headers/assert.o 00:01:48.581 CC test/nvme/reset/reset.o 00:01:48.581 CXX test/cpp_headers/barrier.o 00:01:48.581 LINK idxd_perf 00:01:48.581 LINK spdk_dd 00:01:48.581 LINK reconnect 00:01:48.581 LINK nvmf 00:01:48.581 LINK spdk_trace 00:01:48.581 CC test/event/scheduler/scheduler.o 00:01:48.581 CXX test/cpp_headers/base64.o 00:01:48.581 LINK pci_ut 00:01:48.581 CC test/app/histogram_perf/histogram_perf.o 
00:01:48.581 CC examples/vmd/led/led.o 00:01:48.581 CXX test/cpp_headers/bdev.o 00:01:48.581 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:48.581 CC test/nvme/sgl/sgl.o 00:01:48.581 CC examples/nvme/arbitration/arbitration.o 00:01:48.581 LINK test_dma 00:01:48.581 CC test/app/stub/stub.o 00:01:48.581 CC test/app/jsoncat/jsoncat.o 00:01:48.581 CC app/fio/bdev/fio_plugin.o 00:01:48.581 LINK bdevio 00:01:48.581 LINK dif 00:01:48.581 CC examples/nvme/hotplug/hotplug.o 00:01:48.844 LINK accel_perf 00:01:48.844 CC examples/nvme/abort/abort.o 00:01:48.844 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:48.844 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:48.844 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:48.844 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:48.844 CC test/nvme/e2edp/nvme_dp.o 00:01:48.844 CXX test/cpp_headers/bdev_module.o 00:01:48.844 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:48.844 CC test/nvme/overhead/overhead.o 00:01:48.844 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:48.844 CXX test/cpp_headers/bdev_zone.o 00:01:48.844 LINK spdk_nvme 00:01:48.844 LINK blobcli 00:01:48.844 CC test/nvme/startup/startup.o 00:01:48.844 CXX test/cpp_headers/bit_array.o 00:01:48.844 CC test/nvme/err_injection/err_injection.o 00:01:48.844 LINK histogram_perf 00:01:48.844 CXX test/cpp_headers/bit_pool.o 00:01:48.844 LINK led 00:01:48.844 CC test/nvme/reserve/reserve.o 00:01:48.844 CC test/nvme/simple_copy/simple_copy.o 00:01:48.844 CC test/nvme/connect_stress/connect_stress.o 00:01:48.844 LINK jsoncat 00:01:49.108 LINK stub 00:01:49.108 LINK reset 00:01:49.108 CC test/nvme/boot_partition/boot_partition.o 00:01:49.108 CC test/nvme/compliance/nvme_compliance.o 00:01:49.108 CXX test/cpp_headers/blob_bdev.o 00:01:49.108 LINK scheduler 00:01:49.108 CC test/nvme/fused_ordering/fused_ordering.o 00:01:49.108 CXX test/cpp_headers/blobfs_bdev.o 00:01:49.108 CXX test/cpp_headers/blobfs.o 00:01:49.108 CXX test/cpp_headers/blob.o 00:01:49.108 CXX test/cpp_headers/conf.o 00:01:49.108 LINK pmr_persistence 00:01:49.108 CXX test/cpp_headers/config.o 00:01:49.108 LINK cmb_copy 00:01:49.108 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:49.108 LINK sgl 00:01:49.108 CC test/nvme/fdp/fdp.o 00:01:49.108 LINK hotplug 00:01:49.108 CXX test/cpp_headers/cpuset.o 00:01:49.108 CC test/nvme/cuse/cuse.o 00:01:49.108 CXX test/cpp_headers/crc16.o 00:01:49.108 CXX test/cpp_headers/crc32.o 00:01:49.108 LINK mem_callbacks 00:01:49.108 CXX test/cpp_headers/crc64.o 00:01:49.108 LINK startup 00:01:49.367 CXX test/cpp_headers/dif.o 00:01:49.367 CXX test/cpp_headers/dma.o 00:01:49.367 LINK spdk_nvme_perf 00:01:49.367 LINK err_injection 00:01:49.367 CXX test/cpp_headers/endian.o 00:01:49.367 CXX test/cpp_headers/env.o 00:01:49.367 CXX test/cpp_headers/env_dpdk.o 00:01:49.367 LINK arbitration 00:01:49.367 LINK spdk_nvme_identify 00:01:49.367 LINK nvme_dp 00:01:49.367 LINK connect_stress 00:01:49.367 CXX test/cpp_headers/event.o 00:01:49.367 CXX test/cpp_headers/fd_group.o 00:01:49.367 LINK reserve 00:01:49.367 LINK boot_partition 00:01:49.367 LINK overhead 00:01:49.367 LINK simple_copy 00:01:49.367 LINK spdk_top 00:01:49.367 LINK bdevperf 00:01:49.367 CXX test/cpp_headers/fd.o 00:01:49.367 CXX test/cpp_headers/file.o 00:01:49.367 CXX test/cpp_headers/ftl.o 00:01:49.367 CXX test/cpp_headers/gpt_spec.o 00:01:49.367 CXX test/cpp_headers/hexlify.o 00:01:49.367 CXX test/cpp_headers/histogram_data.o 00:01:49.367 CXX test/cpp_headers/idxd.o 00:01:49.367 LINK abort 00:01:49.367 LINK fused_ordering 00:01:49.367 CXX 
test/cpp_headers/idxd_spec.o 00:01:49.367 CXX test/cpp_headers/init.o 00:01:49.630 CXX test/cpp_headers/ioat.o 00:01:49.630 CXX test/cpp_headers/ioat_spec.o 00:01:49.630 LINK nvme_fuzz 00:01:49.630 LINK nvme_manage 00:01:49.630 CXX test/cpp_headers/iscsi_spec.o 00:01:49.630 LINK doorbell_aers 00:01:49.630 CXX test/cpp_headers/json.o 00:01:49.630 CXX test/cpp_headers/jsonrpc.o 00:01:49.630 CXX test/cpp_headers/keyring.o 00:01:49.630 LINK memory_ut 00:01:49.630 CXX test/cpp_headers/keyring_module.o 00:01:49.630 LINK vhost_fuzz 00:01:49.630 CXX test/cpp_headers/likely.o 00:01:49.630 CXX test/cpp_headers/log.o 00:01:49.630 CXX test/cpp_headers/lvol.o 00:01:49.630 LINK spdk_bdev 00:01:49.631 CXX test/cpp_headers/memory.o 00:01:49.631 CXX test/cpp_headers/mmio.o 00:01:49.631 LINK nvme_compliance 00:01:49.631 CXX test/cpp_headers/nbd.o 00:01:49.631 CXX test/cpp_headers/notify.o 00:01:49.631 CXX test/cpp_headers/nvme.o 00:01:49.631 CXX test/cpp_headers/nvme_intel.o 00:01:49.631 CXX test/cpp_headers/nvme_ocssd.o 00:01:49.631 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:49.631 CXX test/cpp_headers/nvme_spec.o 00:01:49.631 CXX test/cpp_headers/nvme_zns.o 00:01:49.631 CXX test/cpp_headers/nvmf_cmd.o 00:01:49.631 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:49.631 CXX test/cpp_headers/nvmf.o 00:01:49.631 CXX test/cpp_headers/nvmf_spec.o 00:01:49.631 CXX test/cpp_headers/nvmf_transport.o 00:01:49.631 CXX test/cpp_headers/opal.o 00:01:49.889 CXX test/cpp_headers/opal_spec.o 00:01:49.889 CXX test/cpp_headers/pci_ids.o 00:01:49.889 LINK fdp 00:01:49.889 CXX test/cpp_headers/pipe.o 00:01:49.889 CXX test/cpp_headers/queue.o 00:01:49.889 CXX test/cpp_headers/reduce.o 00:01:49.889 CXX test/cpp_headers/rpc.o 00:01:49.889 CXX test/cpp_headers/scheduler.o 00:01:49.889 CXX test/cpp_headers/scsi.o 00:01:49.889 CXX test/cpp_headers/sock.o 00:01:49.889 CXX test/cpp_headers/scsi_spec.o 00:01:49.889 CXX test/cpp_headers/stdinc.o 00:01:49.889 CXX test/cpp_headers/string.o 00:01:49.889 CXX test/cpp_headers/thread.o 00:01:49.889 CXX test/cpp_headers/trace.o 00:01:49.889 CXX test/cpp_headers/trace_parser.o 00:01:49.889 CXX test/cpp_headers/tree.o 00:01:49.889 CXX test/cpp_headers/ublk.o 00:01:49.889 CXX test/cpp_headers/util.o 00:01:49.889 CXX test/cpp_headers/uuid.o 00:01:49.889 CXX test/cpp_headers/version.o 00:01:49.889 CXX test/cpp_headers/vfio_user_pci.o 00:01:49.889 CXX test/cpp_headers/vfio_user_spec.o 00:01:49.889 CXX test/cpp_headers/vhost.o 00:01:49.889 CXX test/cpp_headers/vmd.o 00:01:49.889 CXX test/cpp_headers/xor.o 00:01:49.889 CXX test/cpp_headers/zipf.o 00:01:50.821 LINK cuse 00:01:51.079 LINK iscsi_fuzz 00:01:53.659 LINK esnap 00:01:53.917 00:01:53.917 real 0m47.703s 00:01:53.917 user 10m1.496s 00:01:53.917 sys 2m27.050s 00:01:53.917 00:49:06 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:53.917 00:49:06 make -- common/autotest_common.sh@10 -- $ set +x 00:01:53.917 ************************************ 00:01:53.917 END TEST make 00:01:53.917 ************************************ 00:01:53.917 00:49:06 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:53.917 00:49:06 -- pm/common@29 -- $ signal_monitor_resources TERM 00:01:53.917 00:49:06 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:01:53.917 00:49:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.917 00:49:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:53.917 00:49:06 -- pm/common@44 -- $ pid=1036011 00:01:53.917 00:49:06 
-- pm/common@50 -- $ kill -TERM 1036011 00:01:53.917 00:49:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.917 00:49:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:53.917 00:49:06 -- pm/common@44 -- $ pid=1036013 00:01:53.917 00:49:06 -- pm/common@50 -- $ kill -TERM 1036013 00:01:53.917 00:49:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.917 00:49:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:53.917 00:49:06 -- pm/common@44 -- $ pid=1036015 00:01:53.917 00:49:06 -- pm/common@50 -- $ kill -TERM 1036015 00:01:53.917 00:49:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.917 00:49:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:53.917 00:49:06 -- pm/common@44 -- $ pid=1036050 00:01:53.917 00:49:06 -- pm/common@50 -- $ sudo -E kill -TERM 1036050 00:01:53.917 00:49:06 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:53.917 00:49:06 -- nvmf/common.sh@7 -- # uname -s 00:01:53.917 00:49:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:53.917 00:49:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:53.917 00:49:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:53.917 00:49:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:53.917 00:49:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:53.917 00:49:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:53.917 00:49:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:53.917 00:49:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:53.917 00:49:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:53.917 00:49:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:53.917 00:49:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:01:53.917 00:49:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:01:53.917 00:49:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:53.917 00:49:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:53.917 00:49:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:53.917 00:49:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:53.917 00:49:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:53.917 00:49:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:53.917 00:49:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:53.917 00:49:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:53.917 00:49:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.917 00:49:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.917 00:49:06 -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.917 00:49:06 -- paths/export.sh@5 -- # export PATH 00:01:53.917 00:49:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.917 00:49:06 -- nvmf/common.sh@47 -- # : 0 00:01:53.917 00:49:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:53.917 00:49:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:53.917 00:49:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:53.917 00:49:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:53.917 00:49:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:53.917 00:49:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:53.917 00:49:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:53.917 00:49:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:53.917 00:49:06 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:53.917 00:49:06 -- spdk/autotest.sh@32 -- # uname -s 00:01:53.917 00:49:06 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:53.917 00:49:06 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:53.917 00:49:06 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:53.917 00:49:06 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:53.917 00:49:06 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:53.917 00:49:06 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:53.917 00:49:06 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:53.917 00:49:06 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:53.917 00:49:06 -- spdk/autotest.sh@48 -- # udevadm_pid=1091239 00:01:53.917 00:49:06 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:53.917 00:49:06 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:53.917 00:49:06 -- pm/common@17 -- # local monitor 00:01:53.917 00:49:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.917 00:49:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.917 00:49:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.917 00:49:06 -- pm/common@21 -- # date +%s 00:01:53.917 00:49:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.917 00:49:06 -- pm/common@21 -- # date +%s 00:01:53.917 00:49:06 -- pm/common@25 -- # sleep 1 00:01:53.917 00:49:06 -- pm/common@21 -- # date +%s 00:01:53.917 00:49:06 -- pm/common@21 -- # date +%s 00:01:53.917 00:49:06 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715726946 00:01:53.917 00:49:06 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p 
monitor.autotest.sh.1715726946 00:01:53.917 00:49:06 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715726946 00:01:53.917 00:49:06 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715726946 00:01:53.917 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715726946_collect-vmstat.pm.log 00:01:53.917 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715726946_collect-cpu-load.pm.log 00:01:53.917 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715726946_collect-cpu-temp.pm.log 00:01:53.917 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715726946_collect-bmc-pm.bmc.pm.log 00:01:54.850 00:49:07 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:54.850 00:49:07 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:54.850 00:49:07 -- common/autotest_common.sh@720 -- # xtrace_disable 00:01:54.850 00:49:07 -- common/autotest_common.sh@10 -- # set +x 00:01:55.106 00:49:07 -- spdk/autotest.sh@59 -- # create_test_list 00:01:55.106 00:49:07 -- common/autotest_common.sh@744 -- # xtrace_disable 00:01:55.106 00:49:07 -- common/autotest_common.sh@10 -- # set +x 00:01:55.106 00:49:07 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:55.106 00:49:07 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:55.106 00:49:07 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:55.106 00:49:07 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:55.106 00:49:07 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:55.106 00:49:07 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:55.106 00:49:07 -- common/autotest_common.sh@1451 -- # uname 00:01:55.106 00:49:07 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:01:55.106 00:49:07 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:55.106 00:49:07 -- common/autotest_common.sh@1471 -- # uname 00:01:55.106 00:49:07 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:01:55.106 00:49:07 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:55.106 00:49:07 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:55.106 00:49:07 -- spdk/autotest.sh@72 -- # hash lcov 00:01:55.106 00:49:07 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:55.106 00:49:07 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:55.106 --rc lcov_branch_coverage=1 00:01:55.106 --rc lcov_function_coverage=1 00:01:55.106 --rc genhtml_branch_coverage=1 00:01:55.106 --rc genhtml_function_coverage=1 00:01:55.106 --rc genhtml_legend=1 00:01:55.106 --rc geninfo_all_blocks=1 00:01:55.106 ' 00:01:55.106 00:49:07 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:55.106 --rc lcov_branch_coverage=1 00:01:55.106 --rc lcov_function_coverage=1 00:01:55.106 --rc genhtml_branch_coverage=1 00:01:55.106 --rc genhtml_function_coverage=1 00:01:55.106 --rc genhtml_legend=1 00:01:55.106 --rc 
geninfo_all_blocks=1 00:01:55.106 ' 00:01:55.106 00:49:07 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:55.106 --rc lcov_branch_coverage=1 00:01:55.106 --rc lcov_function_coverage=1 00:01:55.106 --rc genhtml_branch_coverage=1 00:01:55.106 --rc genhtml_function_coverage=1 00:01:55.106 --rc genhtml_legend=1 00:01:55.106 --rc geninfo_all_blocks=1 00:01:55.106 --no-external' 00:01:55.106 00:49:07 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:55.106 --rc lcov_branch_coverage=1 00:01:55.106 --rc lcov_function_coverage=1 00:01:55.106 --rc genhtml_branch_coverage=1 00:01:55.106 --rc genhtml_function_coverage=1 00:01:55.106 --rc genhtml_legend=1 00:01:55.106 --rc geninfo_all_blocks=1 00:01:55.106 --no-external' 00:01:55.106 00:49:07 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:55.106 lcov: LCOV version 1.14 00:01:55.106 00:49:07 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:07.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:07.291 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:08.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:08.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:08.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:08.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:08.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:08.660 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:26.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:26.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:26.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:26.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:26.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:26.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:26.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:26.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:26.756 
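The coverage setup traced above finishes by capturing an lcov "Baseline" trace: the -c -i combination records zero execution counts for every instrumented file before any test runs. A minimal standalone sketch of that step, assuming lcov 1.14 and an SPDK tree built with gcc coverage instrumentation, using the same flags and paths the log shows:

    src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    out=$src/../output
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
      --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1
      --rc genhtml_legend=1 --rc geninfo_all_blocks=1'
    # -c -i = capture initial (baseline) data; -t names the run "Baseline"
    lcov $LCOV_OPTS --no-external -q -c -i -t Baseline -d "$src" -o "$out/cov_base.info"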
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:26.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:26.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:26.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:26.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:26.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:26.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:26.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:26.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:26.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:26.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:26.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:26.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:26.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:26.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:26.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:26.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:26.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:26.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:26.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:26.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:26.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:26.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:26.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:26.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:26.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:26.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:26.756 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:26.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:26.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:26.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:26.756 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:26.757 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:26.757 
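This long run of geninfo warnings is expected: the objects under test/cpp_headers compile each public SPDK header on its own, so their .gcno files contain no functions for gcov to report, and geninfo flags each one with "no functions found" and moves on. If those entries were unwanted in a report they could be pruned from the captured trace afterwards; this is an optional aside, not a step the autotest run above performs:

    # prune the header-only compile units from the baseline trace (illustrative)
    lcov --remove cov_base.info '*/test/cpp_headers/*' -o cov_base.pruned.info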
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:26.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:26.757 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:26.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:26.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:26.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:26.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:26.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:26.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:26.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:26.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:26.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:26.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:26.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:26.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:26.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:26.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:26.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:26.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:26.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:26.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:26.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:26.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:26.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:26.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:26.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:26.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:26.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:26.758 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:26.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:26.758 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:27.692 00:49:39 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:27.692 00:49:39 -- common/autotest_common.sh@720 -- # xtrace_disable 00:02:27.692 00:49:39 -- common/autotest_common.sh@10 -- # set +x 00:02:27.692 00:49:39 -- spdk/autotest.sh@91 -- # rm -f 00:02:27.692 00:49:39 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:29.067 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:02:29.067 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:02:29.067 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:02:29.067 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:02:29.067 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:02:29.067 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:02:29.067 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:02:29.067 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:02:29.067 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:02:29.067 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:02:29.067 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:02:29.067 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:02:29.067 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:02:29.067 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:02:29.067 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:02:29.067 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:02:29.067 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:02:29.330 00:49:41 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:29.330 00:49:41 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:02:29.330 00:49:41 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:02:29.330 00:49:41 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:02:29.330 00:49:41 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:29.330 00:49:41 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:02:29.330 00:49:41 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:02:29.330 00:49:41 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:29.330 00:49:41 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:29.330 00:49:41 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:29.330 00:49:41 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:29.330 00:49:41 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:29.330 00:49:41 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:29.330 00:49:41 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:29.330 00:49:41 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:29.330 No valid GPT data, bailing 00:02:29.330 00:49:41 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:29.330 00:49:41 -- scripts/common.sh@391 -- # pt= 00:02:29.330 00:49:41 -- scripts/common.sh@392 -- # return 1 00:02:29.330 00:49:41 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 
00:02:29.330 1+0 records in 00:02:29.330 1+0 records out 00:02:29.330 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00199651 s, 525 MB/s 00:02:29.330 00:49:41 -- spdk/autotest.sh@118 -- # sync 00:02:29.330 00:49:41 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:29.330 00:49:41 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:29.330 00:49:41 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:31.269 00:49:43 -- spdk/autotest.sh@124 -- # uname -s 00:02:31.269 00:49:43 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:31.269 00:49:43 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:31.269 00:49:43 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:31.269 00:49:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:31.269 00:49:43 -- common/autotest_common.sh@10 -- # set +x 00:02:31.269 ************************************ 00:02:31.269 START TEST setup.sh 00:02:31.269 ************************************ 00:02:31.269 00:49:43 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:31.269 * Looking for test storage... 00:02:31.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:31.269 00:49:43 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:31.269 00:49:43 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:31.269 00:49:43 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:31.269 00:49:43 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:31.269 00:49:43 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:31.269 00:49:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:31.269 ************************************ 00:02:31.269 START TEST acl 00:02:31.269 ************************************ 00:02:31.269 00:49:43 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:31.269 * Looking for test storage... 
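Just above, the pre-test cleanup checks the NVMe namespace for an existing partition table (blkid reports no PTTYPE, so spdk-gpt.py bails with "No valid GPT data") and then zeroes the first MiB of the device. Stripped of the SPDK helper functions, the decision amounts to the following sketch; it is illustrative only and destructive to /dev/nvme0n1:

    dev=/dev/nvme0n1
    # blkid exits non-zero when it finds nothing, hence the || true
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    if [[ -z $pt ]]; then
        # no partition table detected: wipe the first MiB, as the log shows
        dd if=/dev/zero of="$dev" bs=1M count=1
        sync
    fi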
00:02:31.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:31.269 00:49:43 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:31.269 00:49:43 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:02:31.269 00:49:43 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:02:31.269 00:49:43 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:02:31.269 00:49:43 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:31.269 00:49:43 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:02:31.269 00:49:43 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:02:31.269 00:49:43 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:31.269 00:49:43 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:31.269 00:49:43 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:31.269 00:49:43 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:31.269 00:49:43 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:31.269 00:49:43 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:31.269 00:49:43 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:31.269 00:49:43 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:31.269 00:49:43 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:33.170 00:49:45 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:33.170 00:49:45 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:33.170 00:49:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:33.170 00:49:45 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:33.170 00:49:45 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:33.170 00:49:45 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:34.544 Hugepages 00:02:34.544 node hugesize free / total 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.544 00:02:34.544 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.544 00:49:46 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:34.544 00:49:46 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:34.544 00:49:46 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:34.544 00:49:46 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:34.544 00:49:46 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:34.544 ************************************ 00:02:34.544 START TEST denied 00:02:34.544 ************************************ 00:02:34.544 00:49:46 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:02:34.544 00:49:46 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:02:34.544 00:49:46 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:34.544 00:49:46 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:02:34.544 00:49:46 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:34.544 00:49:46 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:36.442 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:02:36.442 00:49:48 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:02:36.442 00:49:48 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:36.442 00:49:48 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:36.442 00:49:48 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:02:36.442 00:49:48 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:02:36.442 00:49:48 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:36.442 00:49:48 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:36.442 00:49:48 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:36.442 00:49:48 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:36.443 00:49:48 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:38.971 00:02:38.971 real 0m3.995s 00:02:38.971 user 0m1.180s 00:02:38.971 sys 0m1.957s 00:02:38.971 00:49:50 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:38.971 00:49:50 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:38.971 ************************************ 00:02:38.971 END TEST denied 00:02:38.971 ************************************ 00:02:38.971 00:49:50 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:38.971 00:49:50 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:38.971 00:49:50 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:38.971 00:49:50 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:38.971 ************************************ 00:02:38.971 START TEST allowed 00:02:38.971 ************************************ 00:02:38.971 00:49:50 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:02:38.971 00:49:50 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:02:38.971 00:49:50 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:38.971 00:49:50 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:02:38.971 00:49:50 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:38.971 00:49:50 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:41.500 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:02:41.500 00:49:53 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:41.500 00:49:53 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:41.500 00:49:53 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:41.500 00:49:53 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:41.500 00:49:53 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:42.873 00:02:42.873 real 0m4.080s 00:02:42.873 user 0m1.148s 00:02:42.873 sys 0m1.874s 00:02:42.873 00:49:54 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:42.873 00:49:54 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:42.873 ************************************ 00:02:42.873 END TEST allowed 00:02:42.873 ************************************ 00:02:42.873 00:02:42.873 real 0m11.381s 00:02:42.873 user 0m3.562s 00:02:42.873 sys 0m5.993s 00:02:42.873 00:49:54 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:42.873 00:49:54 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:42.873 ************************************ 00:02:42.873 END TEST acl 00:02:42.873 ************************************ 00:02:42.873 00:49:54 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:42.873 00:49:54 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:42.873 00:49:54 setup.sh -- 
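The two ACL subtests that just finished drive scripts/setup.sh through environment variables: "denied" blocks the NVMe controller and checks that setup.sh skips it, while "allowed" permits only that controller and checks that it is rebound to a userspace driver (nvme -> vfio-pci). A condensed sketch of what they exercise, using the variable names and grep patterns visible in the trace (illustrative invocations, normally run as root from the SPDK repository root):

    # denied: a blocked controller must be reported as skipped
    PCI_BLOCKED='0000:88:00.0' ./scripts/setup.sh config | grep 'Skipping denied controller at 0000:88:00.0'
    ./scripts/setup.sh reset
    # allowed: only the allowed controller is rebound away from the kernel nvme driver
    PCI_ALLOWED='0000:88:00.0' ./scripts/setup.sh config | grep -E '0000:88:00.0 .*: nvme -> .*'
    ./scripts/setup.sh reset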
common/autotest_common.sh@1103 -- # xtrace_disable 00:02:42.873 00:49:54 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:42.873 ************************************ 00:02:42.873 START TEST hugepages 00:02:42.873 ************************************ 00:02:42.873 00:49:54 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:42.873 * Looking for test storage... 00:02:42.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 35453520 kB' 'MemAvailable: 40140500 kB' 'Buffers: 2696 kB' 'Cached: 18426676 kB' 'SwapCached: 0 kB' 'Active: 14416620 kB' 'Inactive: 4470784 kB' 'Active(anon): 13827460 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 461364 kB' 'Mapped: 214616 kB' 'Shmem: 13369428 kB' 'KReclaimable: 241320 kB' 'Slab: 632616 kB' 'SReclaimable: 241320 kB' 'SUnreclaim: 391296 kB' 'KernelStack: 12976 kB' 'PageTables: 9256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562316 kB' 'Committed_AS: 14956044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198860 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2727516 kB' 'DirectMap2M: 19212288 kB' 'DirectMap1G: 47185920 kB' 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.873 00:49:55 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.873 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.874 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:42.875 00:49:55 
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:42.875 00:49:55 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:42.875 00:49:55 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:42.875 00:49:55 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:42.875 00:49:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:42.875 ************************************ 00:02:42.875 START TEST default_setup 00:02:42.875 ************************************ 00:02:42.875 00:49:55 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:02:42.875 00:49:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:42.875 00:49:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:02:42.875 00:49:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:42.875 00:49:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:02:42.875 00:49:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:42.875 00:49:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:02:42.875 00:49:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:42.875 00:49:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:42.875 00:49:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:42.875 00:49:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:42.875 00:49:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:02:42.875 00:49:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:42.875 00:49:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:42.875 00:49:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:42.875 00:49:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:42.875 00:49:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:42.875 00:49:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:42.875 00:49:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:42.875 00:49:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:02:42.875 00:49:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:02:42.875 00:49:55 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:02:42.875 00:49:55 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:44.248 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:44.248 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:44.248 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:44.248 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:44.248 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:44.248 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:44.248 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 
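The get_test_nr_hugepages trace above asks for 2097152 kB of hugepage memory on node 0; with the 2048 kB Hugepagesize read from /proc/meminfo earlier, that resolves to nr_hugepages=1024. A minimal sketch of the same arithmetic, assuming the standard per-node sysfs knobs seen in the clear_hp loop (variable names here are illustrative, not lifted from the SPDK scripts):

#!/usr/bin/env bash
# Reproduce the traced calculation: a 2 GiB request with 2048 kB pages -> 1024 hugepages on node 0.
size_kb=2097152
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this host
nr_hugepages=$(( size_kb / hugepagesize_kb ))                        # 1024
echo "node0 needs ${nr_hugepages} pages of ${hugepagesize_kb} kB"
# setup.sh then applies the count through the per-node sysfs file (root required):
#   echo "${nr_hugepages}" > /sys/devices/system/node/node0/hugepages/hugepages-${hugepagesize_kb}kB/nr_hugepages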
00:02:44.248 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:44.248 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:44.248 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:44.248 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:44.248 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:44.248 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:44.248 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:44.248 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:44.248 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:45.182 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37582580 kB' 'MemAvailable: 42269544 kB' 'Buffers: 2696 kB' 'Cached: 18426772 kB' 'SwapCached: 0 kB' 'Active: 14435544 kB' 'Inactive: 4470784 kB' 'Active(anon): 13846384 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 479948 kB' 'Mapped: 214576 kB' 'Shmem: 13369524 kB' 'KReclaimable: 241288 kB' 'Slab: 631920 kB' 'SReclaimable: 241288 kB' 'SUnreclaim: 390632 kB' 'KernelStack: 13072 kB' 'PageTables: 9500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14978956 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 198940 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2727516 kB' 'DirectMap2M: 19212288 kB' 'DirectMap1G: 47185920 kB' 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.446 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.447 00:49:57 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.447 00:49:57 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 
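The loop above walks every /proc/meminfo field and skips everything except the requested key; for AnonHugePages it ends in 'echo 0', so the test records anon=0. A functionally similar lookup (a sketch, not the SPDK setup/common.sh helper verbatim) fits in a single awk pass; the 'Node N ' prefix stripping mirrors what the traced mem=("${mem[@]#Node +([0-9]) }") expansion does for per-node meminfo files:

# Print one value from /proc/meminfo, or from a node's meminfo when a node id is given.
get_meminfo_value() {
    local key=$1 node=${2:-}
    local src=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node${node}/meminfo ]] && \
        src=/sys/devices/system/node/node${node}/meminfo
    awk -v k="${key}:" '{ sub(/^Node [0-9]+ /, ""); if ($1 == k) { print $2; exit } }' "$src"
}
get_meminfo_value AnonHugePages       # prints 0 here, matching anon=0 above
get_meminfo_value HugePages_Total 0   # per-node variant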
00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37586572 kB' 'MemAvailable: 42273536 kB' 'Buffers: 2696 kB' 'Cached: 18426772 kB' 'SwapCached: 0 kB' 'Active: 14435748 kB' 'Inactive: 4470784 kB' 'Active(anon): 13846588 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480296 kB' 'Mapped: 214580 kB' 'Shmem: 13369524 kB' 'KReclaimable: 241288 kB' 'Slab: 631920 kB' 'SReclaimable: 241288 kB' 'SUnreclaim: 390632 kB' 'KernelStack: 12800 kB' 'PageTables: 8760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14978976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198892 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2727516 kB' 'DirectMap2M: 19212288 kB' 'DirectMap1G: 47185920 kB' 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.447 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.448 00:49:57 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:45.449 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:45.449 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:45.449 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
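The trace above is the tail of a get_meminfo-style lookup: the script walks the meminfo file line by line, skips every key that is not the requested field (HugePages_Surp here), and echoes the value of the matching line. A minimal sketch of that lookup pattern, assuming a standard /proc/meminfo layout; lookup_meminfo and its arguments are illustrative names, not the SPDK helper itself:

#!/usr/bin/env bash
# Illustrative sketch only: fetch one field from a meminfo-style file by
# splitting each "Key: value kB" line on ': ', as the trace above does.
lookup_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # Non-matching keys are skipped; each skip is one "continue" entry
        # in the trace.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}
# Example: lookup_meminfo HugePages_Surp   -> prints 0 on this runner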
00:02:45.449 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:45.449 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:45.449 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:45.449 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:45.449 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:45.449 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:45.449 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:45.449 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:45.449 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:45.449 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:45.449 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:45.449 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:45.449 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37586688 kB' 'MemAvailable: 42273652 kB' 'Buffers: 2696 kB' 'Cached: 18426776 kB' 'SwapCached: 0 kB' 'Active: 14434728 kB' 'Inactive: 4470784 kB' 'Active(anon): 13845568 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 479340 kB' 'Mapped: 214644 kB' 'Shmem: 13369528 kB' 'KReclaimable: 241288 kB' 'Slab: 632008 kB' 'SReclaimable: 241288 kB' 'SUnreclaim: 390720 kB' 'KernelStack: 12928 kB' 'PageTables: 8984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14978996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198924 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2727516 kB' 'DirectMap2M: 19212288 kB' 'DirectMap1G: 47185920 kB'
00:02:45.451 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:45.451 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:45.451 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:45.451 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:02:45.451 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:45.451 nr_hugepages=1024
00:02:45.451 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:45.451 resv_hugepages=0
00:02:45.451 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:45.451 surplus_hugepages=0
00:02:45.451 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:45.451 anon_hugepages=0
00:02:45.451 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:45.451 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
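At this point the default_setup case has its three counters (requested pages, surplus, reserved) and checks them against what the kernel reports. A sketch of that accounting check, with the values observed in this run; it uses awk instead of the traced helper for brevity, and the variable names are illustrative rather than the exact hugepages.sh internals:

#!/usr/bin/env bash
nr_hugepages=1024                                              # pages the test asked for
surp=$(awk '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)   # 0 in this run
resv=$(awk '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)   # 0 in this run
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)  # 1024 in this run
# The pool is considered consistent when the kernel-visible total equals the
# requested count plus any surplus and reserved pages.
(( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )) \
    && echo "hugepage pool consistent: total=$total surp=$surp resv=$resv"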
00:02:45.451 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:45.451 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:45.451 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:45.451 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:45.451 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:45.451 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37587916 kB' 'MemAvailable: 42274880 kB' 'Buffers: 2696 kB' 'Cached: 18426812 kB' 'SwapCached: 0 kB' 'Active: 14435092 kB' 'Inactive: 4470784 kB' 'Active(anon): 13845932 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 479648 kB' 'Mapped: 214644 kB' 'Shmem: 13369564 kB' 'KReclaimable: 241288 kB' 'Slab: 632004 kB' 'SReclaimable: 241288 kB' 'SUnreclaim: 390716 kB' 'KernelStack: 12928 kB' 'PageTables: 8984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14979016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198924 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2727516 kB' 'DirectMap2M: 19212288 kB' 'DirectMap1G: 47185920 kB'
00:02:45.453 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:45.453 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:02:45.453 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:45.453 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:45.453 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:02:45.453 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:02:45.453 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:45.453 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:45.453 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:45.453 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:45.453 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:45.453 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:45.453 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:45.453 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:45.453 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:45.453 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:45.453 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:02:45.453 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:45.453 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:45.453 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:45.453 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:45.453 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:45.453 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 20890536 kB' 'MemUsed: 11939348 kB' 'SwapCached: 0 kB' 'Active: 8408264 kB' 'Inactive: 187456 kB' 'Active(anon): 8012108 kB' 'Inactive(anon): 0 kB' 'Active(file): 396156 kB' 'Inactive(file): 187456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8358372 kB' 'Mapped: 102464 kB' 'AnonPages: 240496 kB' 'Shmem: 7774760 kB' 'KernelStack: 6600 kB' 'PageTables: 3924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118044 kB' 'Slab: 330652 kB' 'SReclaimable: 118044 kB' 'SUnreclaim: 212608 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
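The same lookup now repeats per NUMA node: when a node index is passed, the helper switches from /proc/meminfo to /sys/devices/system/node/node<N>/meminfo, whose lines carry a "Node <N> " prefix that must be stripped before the key match. A sketch of that variant plus the node enumeration seen in the get_nodes step, again with illustrative names rather than the exact SPDK helpers:

#!/usr/bin/env bash
# Count NUMA nodes roughly the way the get_nodes step above does, by globbing
# the node directories under sysfs; this runner reports 2.
nodes=(/sys/devices/system/node/node[0-9]*)
echo "no_nodes=${#nodes[@]}"

lookup_node_meminfo() {
    local get=$1 node=$2 line var val _
    local mem_f=/sys/devices/system/node/node${node}/meminfo
    [[ -e $mem_f ]] || return 1
    while IFS= read -r line; do
        # Lines look like "Node 0 HugePages_Surp:     0"; drop the node
        # prefix, then split on ': ' exactly as for /proc/meminfo.
        line=${line#Node "$node" }
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}
# Example: lookup_node_meminfo HugePages_Total 0   -> 1024 on node0 of this box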
00:02:45.453 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:45.453 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:02:45.454 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.454 00:49:57 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.454 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.454 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.454 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.454 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:45.454 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:45.454 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:45.454 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.454 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:45.454 00:49:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:45.454 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:45.454 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:45.454 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:45.454 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:45.454 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:45.454 node0=1024 expecting 1024 00:02:45.454 00:49:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:45.454 00:02:45.454 real 0m2.644s 00:02:45.454 user 0m0.719s 00:02:45.454 sys 0m0.912s 00:02:45.454 00:49:57 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:45.454 00:49:57 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:02:45.454 ************************************ 00:02:45.454 END TEST default_setup 00:02:45.454 ************************************ 00:02:45.454 00:49:57 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:45.454 00:49:57 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:45.454 00:49:57 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:45.454 00:49:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:45.454 ************************************ 00:02:45.454 START TEST per_node_1G_alloc 00:02:45.454 ************************************ 00:02:45.454 00:49:57 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:02:45.454 00:49:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:02:45.454 00:49:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:45.454 00:49:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:45.454 00:49:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:45.455 00:49:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:02:45.455 00:49:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:45.455 00:49:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
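The "node0=1024 expecting 1024" assertion that closed the default_setup test above boils down to reading node 0's hugepage count back from the kernel and comparing it with the requested value. A minimal stand-alone sketch of that check, using only standard sysfs/procfs paths (an approximation for readability, not the setup/hugepages.sh code itself):

    #!/usr/bin/env bash
    # Re-check that NUMA node 0 ended up with the expected number of
    # default-size (2 MiB) hugepages after the default setup ran.
    expected=1024
    node=0
    sysfs=/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages
    if [[ -r $sysfs ]]; then
        actual=$(cat "$sysfs")
    else
        # Fall back to the machine-wide counter on non-NUMA boxes.
        actual=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    fi
    echo "node${node}=${actual} expecting ${expected}"
    [[ $actual == "$expected" ]]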
00:02:45.455 00:49:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:45.455 00:49:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:45.455 00:49:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:02:45.455 00:49:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:45.455 00:49:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:45.455 00:49:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:45.455 00:49:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:45.455 00:49:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:45.455 00:49:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:45.455 00:49:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:45.455 00:49:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:45.455 00:49:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:45.455 00:49:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:45.455 00:49:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:45.455 00:49:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:45.455 00:49:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:02:45.455 00:49:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:02:45.455 00:49:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:02:45.455 00:49:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:45.455 00:49:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:46.830 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:46.830 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:46.830 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:46.830 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:46.830 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:46.830 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:46.830 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:46.830 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:46.830 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:46.830 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:46.830 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:46.830 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:46.830 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:46.830 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:46.830 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:46.830 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:46.830 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@147 -- # nr_hugepages=1024 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37604624 kB' 'MemAvailable: 42291588 kB' 'Buffers: 2696 kB' 'Cached: 18426892 kB' 'SwapCached: 0 kB' 'Active: 14435848 kB' 'Inactive: 4470784 kB' 'Active(anon): 13846688 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480472 kB' 'Mapped: 214748 kB' 'Shmem: 13369644 kB' 'KReclaimable: 241288 kB' 'Slab: 632020 kB' 'SReclaimable: 241288 kB' 'SUnreclaim: 390732 kB' 'KernelStack: 12928 kB' 'PageTables: 9068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14979188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198892 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2727516 kB' 'DirectMap2M: 19212288 kB' 'DirectMap1G: 47185920 kB' 
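The get_meminfo capture above, and the long key-by-key scan around it, amount to "read /proc/meminfo (or the per-node copy), find one field, print its value". A condensed stand-alone sketch of that lookup is shown below; the function name is hypothetical, the real helper lives in setup/common.sh:

    # Usage: get_meminfo_sketch <Field> [numa-node]
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo
        # Per-node queries read the node's own meminfo copy instead.
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node${node}/meminfo
        # Node files prefix every line with "Node <n> "; strip that, then
        # print the second column of the requested field.
        sed 's/^Node [0-9]* //' "$mem_f" | awk -v key="${get}:" '$1 == key {print $2}'
    }
    # Examples:
    #   get_meminfo_sketch AnonHugePages      -> kB of transparent hugepages in use
    #   get_meminfo_sketch HugePages_Surp 0   -> surplus hugepages on node 0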
00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 
00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.112 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
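Having found AnonHugePages (anon=0), verify_nr_hugepages now gathers HugePages_Surp and, further down, HugePages_Rsvd the same way before comparing per-node counts. The bookkeeping can be summarized as below; variable names mirror the trace, but the arithmetic is an approximation of the check rather than a copy of setup/hugepages.sh:

    # Collect the counters the verification relies on.
    anon=$(awk '$1 == "AnonHugePages:"    {print $2}' /proc/meminfo)  # THP in use, kB
    surp=$(awk '$1 == "HugePages_Surp:"   {print $2}' /proc/meminfo)  # surplus pages
    resv=$(awk '$1 == "HugePages_Rsvd:"   {print $2}' /proc/meminfo)  # reserved pages
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    echo "total=$total surp=$surp resv=$resv anon=${anon}kB"
    # With 2 x 512 pages requested across nodes 0 and 1, the pool should show
    # 1024 pages total and no surplus skewing the per-node expectations.
    (( surp == 0 )) || echo "unexpected surplus hugepages"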
00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37610968 kB' 'MemAvailable: 42297932 kB' 'Buffers: 2696 kB' 'Cached: 18426896 kB' 'SwapCached: 0 kB' 'Active: 14436180 kB' 'Inactive: 4470784 kB' 'Active(anon): 13847020 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480816 kB' 'Mapped: 214748 kB' 'Shmem: 13369648 kB' 'KReclaimable: 241288 kB' 'Slab: 631992 kB' 'SReclaimable: 241288 kB' 'SUnreclaim: 390704 kB' 'KernelStack: 12912 kB' 'PageTables: 9008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14979208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198860 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2727516 kB' 'DirectMap2M: 19212288 kB' 'DirectMap1G: 47185920 kB' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.113 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.114 00:49:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:47.114 00:49:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37610808 kB' 'MemAvailable: 42297772 kB' 'Buffers: 2696 kB' 'Cached: 18426908 kB' 'SwapCached: 0 kB' 'Active: 14436260 kB' 'Inactive: 4470784 kB' 'Active(anon): 13847100 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480828 kB' 'Mapped: 214732 kB' 'Shmem: 13369660 kB' 'KReclaimable: 241288 kB' 'Slab: 631992 kB' 'SReclaimable: 241288 kB' 'SUnreclaim: 390704 kB' 'KernelStack: 12944 kB' 'PageTables: 8996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14979228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198860 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2727516 kB' 'DirectMap2M: 19212288 kB' 'DirectMap1G: 47185920 kB' 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.114 00:49:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.114 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.115 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:47.116 nr_hugepages=1024 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:47.116 resv_hugepages=0 00:02:47.116 00:49:59 
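At this point the second lookup (HugePages_Rsvd) has also returned 0, so hugepages.sh has surp=0, resv=0 and the requested count nr_hugepages=1024, which it echoes as nr_hugepages/resv_hugepages. The trace that follows repeats the same scan for HugePages_Total and cross-checks the sum. A compact restatement of that bookkeeping, using the values from this run and the get_meminfo sketch above (helper name and exact line numbers are assumptions, not the verbatim script):

    nr_hugepages=1024                       # pages requested by the test
    surp=$(get_meminfo HugePages_Surp)      # -> 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)      # -> 0 in this run
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    # The system-wide total must account for every requested, surplus and
    # reserved page: 1024 == 1024 + 0 + 0 here, so the check passes.
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))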
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:47.116 surplus_hugepages=0 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:47.116 anon_hugepages=0 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37613572 kB' 'MemAvailable: 42300536 kB' 'Buffers: 2696 kB' 'Cached: 18426916 kB' 'SwapCached: 0 kB' 'Active: 14435568 kB' 'Inactive: 4470784 kB' 'Active(anon): 13846408 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480112 kB' 'Mapped: 214656 kB' 'Shmem: 13369668 kB' 'KReclaimable: 241288 kB' 'Slab: 631984 kB' 'SReclaimable: 241288 kB' 'SUnreclaim: 390696 kB' 'KernelStack: 12944 kB' 'PageTables: 8992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14979252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198860 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2727516 kB' 'DirectMap2M: 19212288 kB' 'DirectMap1G: 47185920 kB' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.116 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:47.117 00:49:59 
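The HugePages_Total lookup has just returned 1024, the consistency check at hugepages.sh@110 passed, and get_nodes has begun splitting the allocation across NUMA nodes: 512 pages per node, two nodes, as the remainder of the trace verifies for node 0 via /sys/devices/system/node/node0/meminfo (HugePages_Total: 512, HugePages_Surp: 0). A sketch of that per-node phase, assuming the even two-node split shown here and the get_meminfo sketch above:

    shopt -s nullglob                        # avoid a literal glob on nodeless systems
    declare -a nodes_sys=()
    for node in /sys/devices/system/node/node[0-9]*; do
        nodes_sys[${node##*node}]=512        # expected pages per node in this run
    done
    no_nodes=${#nodes_sys[@]}                # 2 on this machine
    (( no_nodes > 0 )) || exit 1
    # Each node is then checked against its own meminfo file, as the trace
    # below does for node 0.
    for node in "${!nodes_sys[@]}"; do
        total=$(get_meminfo HugePages_Total "$node")
        surp=$(get_meminfo HugePages_Surp "$node")
        echo "node$node: HugePages_Total=$total HugePages_Surp=$surp"
    done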
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21956120 kB' 'MemUsed: 10873764 kB' 'SwapCached: 0 kB' 'Active: 8408568 kB' 'Inactive: 187456 kB' 'Active(anon): 8012412 kB' 'Inactive(anon): 0 kB' 'Active(file): 396156 kB' 'Inactive(file): 187456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8358404 kB' 'Mapped: 102476 kB' 'AnonPages: 240808 kB' 'Shmem: 7774792 kB' 'KernelStack: 6600 kB' 'PageTables: 3980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118044 kB' 'Slab: 330700 kB' 'SReclaimable: 118044 kB' 'SUnreclaim: 212656 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.117 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711844 kB' 'MemFree: 15657136 kB' 'MemUsed: 12054708 kB' 'SwapCached: 0 kB' 'Active: 6027164 kB' 'Inactive: 4283328 kB' 'Active(anon): 5834160 kB' 'Inactive(anon): 0 kB' 'Active(file): 193004 kB' 'Inactive(file): 4283328 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10071212 kB' 'Mapped: 112180 kB' 'AnonPages: 239456 kB' 'Shmem: 5594880 kB' 'KernelStack: 6344 kB' 'PageTables: 5012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 123244 kB' 'Slab: 301284 kB' 'SReclaimable: 123244 kB' 'SUnreclaim: 178040 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
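The get_meminfo call traced above resolves to /sys/devices/system/node/node1/meminfo, strips the "Node 1 " prefix from every row, and walks the key/value pairs until it reaches HugePages_Surp (0 in this run). A minimal standalone sketch of that lookup, assuming only the standard kernel sysfs layout; this is an illustration of the traced parsing loop, not the SPDK setup/common.sh helper itself:

#!/usr/bin/env bash
# Hedged sketch (not the SPDK helper): read one field from a per-node meminfo
# file the way the traced get_meminfo loop does -- prefer the node's own
# meminfo, drop the "Node <id> " prefix, then scan "key: value" pairs.
get_node_meminfo() {
    local get=$1 node=$2 var val _ line
    local mem_f=/sys/devices/system/node/node${node}/meminfo
    [[ -e $mem_f ]] || mem_f=/proc/meminfo      # fall back to system-wide stats
    while IFS= read -r line; do
        line=${line#"Node $node "}              # per-node files prefix every row
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < "$mem_f"
    echo 0                                      # field absent: report 0, as the test does
}

# Example: surplus 2M hugepages on NUMA node 1 (prints 0 for the run above).
get_node_meminfo HugePages_Surp 1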
00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.118 00:49:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.118 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.119 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.119 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.119 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.119 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.119 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.119 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.119 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.119 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.119 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.119 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:47.119 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:47.119 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:47.119 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:47.119 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:47.119 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:47.119 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:47.119 node0=512 expecting 512 00:02:47.119 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:47.119 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:47.119 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:47.119 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:47.119 node1=512 expecting 512 00:02:47.119 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:47.119 00:02:47.119 real 0m1.635s 00:02:47.119 user 0m0.689s 00:02:47.119 sys 0m0.913s 00:02:47.119 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:47.119 00:49:59 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:47.119 ************************************ 00:02:47.119 END TEST per_node_1G_alloc 00:02:47.119 ************************************ 00:02:47.119 00:49:59 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:47.119 00:49:59 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:47.119 00:49:59 
setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:47.119 00:49:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:47.377 ************************************ 00:02:47.377 START TEST even_2G_alloc 00:02:47.377 ************************************ 00:02:47.377 00:49:59 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:02:47.377 00:49:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:47.377 00:49:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:47.377 00:49:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:47.377 00:49:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:47.377 00:49:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:47.377 00:49:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:47.377 00:49:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:47.377 00:49:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:47.377 00:49:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:47.377 00:49:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:47.377 00:49:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:47.377 00:49:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:47.377 00:49:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:47.377 00:49:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:47.377 00:49:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:47.377 00:49:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:47.377 00:49:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:02:47.377 00:49:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:47.377 00:49:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:47.377 00:49:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:47.377 00:49:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:47.377 00:49:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:47.377 00:49:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:47.377 00:49:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:47.377 00:49:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:47.377 00:49:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:02:47.377 00:49:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:47.377 00:49:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:48.758 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:48.758 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:48.758 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
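The even_2G_alloc prologue above requests 1024 x 2 MiB pages (2 GiB total) and, with HUGE_EVEN_ALLOC=yes and _no_nodes=2, expects the reservation split 512/512 across the two NUMA nodes before scripts/setup.sh runs. A minimal sketch of such an even split using the kernel's per-node nr_hugepages knobs; a hypothetical standalone script under those assumptions, not the SPDK setup.sh flow:

#!/usr/bin/env bash
# Hedged sketch: divide a 2M-hugepage reservation evenly across online NUMA
# nodes via the standard sysfs per-node knobs. NRHUGE mirrors the value the
# test exports; everything else here is illustrative.
set -e

NRHUGE=${NRHUGE:-1024}                 # total 2 MiB hugepages to reserve
HUGEPGSZ=hugepages-2048kB

mapfile -t nodes < <(ls -d /sys/devices/system/node/node[0-9]*)
per_node=$(( NRHUGE / ${#nodes[@]} ))  # e.g. 1024 pages / 2 nodes = 512 each

for node in "${nodes[@]}"; do
    echo "$per_node" | sudo tee "$node/hugetlb/$HUGEPGSZ/nr_hugepages" >/dev/null
done

# Show the resulting per-node reservation (should read 512 on each node here).
grep -H . /sys/devices/system/node/node*/hugetlb/$HUGEPGSZ/nr_hugepages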
00:02:48.758 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:48.758 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:48.758 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:48.758 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:48.758 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:48.758 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:48.758 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:48.758 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:48.758 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:48.758 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:48.758 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:48.758 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:48.759 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:48.759 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37604388 kB' 'MemAvailable: 42291352 kB' 'Buffers: 2696 kB' 'Cached: 18427024 kB' 'SwapCached: 0 kB' 'Active: 14438136 kB' 'Inactive: 4470784 kB' 'Active(anon): 13848976 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 
0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482872 kB' 'Mapped: 214692 kB' 'Shmem: 13369776 kB' 'KReclaimable: 241288 kB' 'Slab: 632012 kB' 'SReclaimable: 241288 kB' 'SUnreclaim: 390724 kB' 'KernelStack: 13248 kB' 'PageTables: 9888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14981840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199276 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2727516 kB' 'DirectMap2M: 19212288 kB' 'DirectMap1G: 47185920 kB' 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.759 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.760 00:50:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:48.760 00:50:01 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37607736 kB' 'MemAvailable: 42294700 kB' 'Buffers: 2696 kB' 'Cached: 18427024 kB' 'SwapCached: 0 kB' 'Active: 14437568 kB' 'Inactive: 4470784 kB' 'Active(anon): 13848408 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481900 kB' 'Mapped: 214700 kB' 'Shmem: 13369776 kB' 'KReclaimable: 241288 kB' 'Slab: 632028 kB' 'SReclaimable: 241288 kB' 'SUnreclaim: 390740 kB' 'KernelStack: 13072 kB' 'PageTables: 9052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14979484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199052 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2727516 kB' 'DirectMap2M: 19212288 kB' 'DirectMap1G: 47185920 kB' 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.760 
00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.760 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.760
[... the same per-key trace (setup/common.sh@32 [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / @32 continue / @31 IFS=': ' / @31 read -r var val _) repeats for every remaining /proc/meminfo key from Buffers through HugePages_Rsvd; none match ...]
00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37608140 kB' 'MemAvailable: 42295104 kB' 'Buffers: 2696 kB' 'Cached: 18427040 kB' 'SwapCached: 0 kB' 'Active: 14436684 kB' 'Inactive: 4470784 kB' 'Active(anon): 13847524 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481016 kB' 'Mapped: 214668 kB' 'Shmem: 13369792 kB' 'KReclaimable: 241288 kB' 'Slab: 632084 kB' 'SReclaimable: 241288 kB' 'SUnreclaim: 390796 kB' 'KernelStack: 12992 kB' 'PageTables: 8912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14979504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199036 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2727516 kB' 'DirectMap2M: 19212288 kB' 'DirectMap1G: 47185920 kB' 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.762 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.762
[... the same per-key trace repeats for the remaining /proc/meminfo keys (SwapCached through HugePages_Free); none match HugePages_Rsvd ...]
00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:48.764 nr_hugepages=1024 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:48.764 resv_hugepages=0 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:48.764 surplus_hugepages=0 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:48.764 anon_hugepages=0 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.764
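The xtrace entries above all come from a single helper in setup/common.sh: get_meminfo reads /proc/meminfo (or a per-node meminfo file), splits each "key: value" line with IFS=': ', and echoes the value of the requested key; every "[[ <key> == \H\u\g\e\P\a\g\e\s\_... ]]" / "continue" pair in the log is one iteration of that scan. A minimal, hedged sketch of the pattern, reconstructed from the trace rather than copied from the SPDK source (names and details are assumptions):

    shopt -s extglob                          # needed for the +([0-9]) pattern below

    get_meminfo() {                           # sketch only; the real setup/common.sh helper may differ
        local get=$1 node=$2
        local var val
        local mem_f=/proc/meminfo mem
        # when a node is given, read the per-NUMA-node view instead of the global one
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # per-node files prefix every line with "Node N "
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue  # this comparison is what repeats once per meminfo key in the log
            echo "${val:-0}"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp                # prints 0 on this system, hence surp=0 above

Because the scan is traced with set -x, a single get_meminfo call produces one continue/IFS/read triple per meminfo key, which is why the log around each hugepages check is so long.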
00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37608140 kB' 'MemAvailable: 42295104 kB' 'Buffers: 2696 kB' 'Cached: 18427068 kB' 'SwapCached: 0 kB' 'Active: 14436904 kB' 'Inactive: 4470784 kB' 'Active(anon): 13847744 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481196 kB' 'Mapped: 214668 kB' 'Shmem: 13369820 kB' 'KReclaimable: 241288 kB' 'Slab: 632108 kB' 'SReclaimable: 241288 kB' 'SUnreclaim: 390820 kB' 'KernelStack: 12992 kB' 'PageTables: 8936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14979528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199036 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2727516 kB' 'DirectMap2M: 19212288 kB' 'DirectMap1G: 47185920 kB' 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.764 00:50:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.764 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.765 00:50:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.765 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
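The long run of "-- # continue" entries above is setup/common.sh's get_meminfo helper stepping field by field through a meminfo file until it reaches the field it was asked for (HugePages_Total here). A minimal standalone sketch of the same scan, written independently of the real helper; the sed prefix-strip and the example calls at the bottom are illustrative assumptions, not the script's own code:

#!/usr/bin/env bash
# Minimal sketch of the field scan visible in the trace: walk a
# meminfo-style file line by line, split on ': ', skip every field
# that is not the requested one, and print the first matching value.
get_meminfo_sketch() {
    local get=$1 node=${2:-}           # field name, optional NUMA node
    local mem_f=/proc/meminfo
    # Per-node queries read the node-specific file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Node files prefix each line with "Node N "; drop it so the field
    # names match the plain /proc/meminfo layout (the traced helper uses
    # an extglob parameter expansion for this instead of sed).
    local var val rest
    while IFS=': ' read -r var val rest; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

get_meminfo_sketch HugePages_Total      # e.g. 1024 after the 2G setup
get_meminfo_sketch HugePages_Surp 0     # surplus pages on NUMA node 0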
00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21956016 kB' 'MemUsed: 10873868 kB' 'SwapCached: 0 kB' 'Active: 8409120 kB' 'Inactive: 187456 kB' 'Active(anon): 8012964 kB' 'Inactive(anon): 0 kB' 'Active(file): 396156 kB' 'Inactive(file): 187456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8358508 kB' 'Mapped: 102488 kB' 'AnonPages: 241252 kB' 'Shmem: 7774896 kB' 'KernelStack: 6632 kB' 'PageTables: 3920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'KReclaimable: 118044 kB' 'Slab: 330696 kB' 'SReclaimable: 118044 kB' 'SUnreclaim: 212652 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.766 00:50:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.766 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.767 00:50:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
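Between the global lookup and the per-node scans, the trace walks hugepages.sh through get_nodes: it enumerates /sys/devices/system/node/node[0-9]*, records the 512-page allocation expected on each of the two nodes, and then asks get_meminfo for each node's HugePages_Surp. A rough sketch of that discovery-and-query step, assuming the standard sysfs NUMA layout; the variable names mirror the trace but the awk query is an illustrative stand-in:

#!/usr/bin/env bash
# Rough sketch of NUMA node discovery plus a per-node surplus query.
declare -a nodes_sys
for node_dir in /sys/devices/system/node/node[0-9]*; do
    [[ -d $node_dir ]] || continue     # skip if the glob did not expand
    node=${node_dir##*node}            # ".../node1" -> "1"
    nodes_sys[node]=512                # per-node allocation under test
done
no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }

for node in "${!nodes_sys[@]}"; do
    # Per-node meminfo lines carry a "Node N " prefix, e.g.
    # "Node 0 HugePages_Surp:     0"; the last field is the value.
    surp=$(awk '/HugePages_Surp/ {print $NF}' \
           "/sys/devices/system/node/node$node/meminfo")
    echo "node$node surplus=$surp"
done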
00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.767 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711844 kB' 'MemFree: 15652324 kB' 'MemUsed: 12059520 kB' 'SwapCached: 0 kB' 'Active: 6028008 kB' 'Inactive: 4283328 kB' 'Active(anon): 5835004 kB' 'Inactive(anon): 0 kB' 'Active(file): 193004 kB' 'Inactive(file): 4283328 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10071292 kB' 'Mapped: 112180 kB' 'AnonPages: 240164 kB' 'Shmem: 5594960 kB' 'KernelStack: 6344 kB' 'PageTables: 4968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 
kB' 'WritebackTmp: 0 kB' 'KReclaimable: 123244 kB' 'Slab: 301412 kB' 'SReclaimable: 123244 kB' 'SUnreclaim: 178168 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.026 00:50:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.026 00:50:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.026 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
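The even_2G_alloc test ends just below by printing 'node0=512 expecting 512' and 'node1=512 expecting 512' and comparing the two values. The arithmetic behind that expectation, sketched with the figures this trace reports; the 2 GiB request size is inferred from the test name and the 1024-page total, so treat it as an assumption:

#!/usr/bin/env bash
# Worked arithmetic for the even 2G allocation seen in the trace:
# 2 GiB of hugepage memory at the default 2048 kB page size gives
# 1024 pages, split evenly across the two NUMA nodes.
size_kb=2097152                                  # assumed 2 GiB request
hugepagesize_kb=$(awk '/Hugepagesize/ {print $2}' /proc/meminfo)
nr_hugepages=$(( size_kb / hugepagesize_kb ))    # 1024 on this system
no_nodes=2
per_node=$(( nr_hugepages / no_nodes ))          # 512

for node in 0 1; do
    got=$(awk '/HugePages_Total/ {print $NF}' \
          "/sys/devices/system/node/node$node/meminfo")
    echo "node$node=$got expecting $per_node"
    [[ $got == "$per_node" ]] || echo "node$node mismatch" >&2
done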
00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:49.027 node0=512 expecting 512 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:49.027 node1=512 expecting 512 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:49.027 00:02:49.027 real 0m1.652s 00:02:49.027 user 0m0.644s 00:02:49.027 sys 0m0.973s 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:49.027 00:50:01 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:49.027 ************************************ 00:02:49.027 END TEST even_2G_alloc 00:02:49.027 ************************************ 00:02:49.027 00:50:01 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:02:49.027 00:50:01 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:49.027 00:50:01 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:49.027 00:50:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:49.027 ************************************ 00:02:49.027 START TEST odd_alloc 00:02:49.027 ************************************ 00:02:49.027 00:50:01 
setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:02:49.027 00:50:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:02:49.027 00:50:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:02:49.027 00:50:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:49.027 00:50:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:49.027 00:50:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:02:49.027 00:50:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:49.027 00:50:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:49.027 00:50:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:49.027 00:50:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:02:49.027 00:50:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:49.027 00:50:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:49.027 00:50:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:49.027 00:50:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:49.027 00:50:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:49.027 00:50:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:49.027 00:50:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:49.027 00:50:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:02:49.027 00:50:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:49.027 00:50:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:49.027 00:50:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:49.027 00:50:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:49.027 00:50:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:49.027 00:50:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:49.028 00:50:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:49.028 00:50:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:49.028 00:50:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:02:49.028 00:50:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:49.028 00:50:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:50.404 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:50.404 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:50.404 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:50.404 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:50.404 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:50.404 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:50.404 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:50.404 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:50.404 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 
00:02:50.404 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:50.404 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:50.404 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:50.404 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:50.404 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:50.404 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:50.404 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:50.404 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37590092 kB' 'MemAvailable: 42277040 kB' 'Buffers: 2696 kB' 'Cached: 18427160 kB' 'SwapCached: 0 kB' 'Active: 14430216 kB' 'Inactive: 4470784 kB' 'Active(anon): 13841056 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474380 kB' 'Mapped: 213956 kB' 'Shmem: 13369912 kB' 'KReclaimable: 241256 kB' 'Slab: 631840 kB' 'SReclaimable: 241256 kB' 'SUnreclaim: 390584 kB' 'KernelStack: 12832 kB' 'PageTables: 8228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 14952584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198892 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2727516 kB' 'DirectMap2M: 19212288 kB' 'DirectMap1G: 47185920 kB' 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.404 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.404 00:50:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
[the same @32 key test / @32 continue / @31 IFS=': ' / @31 read -r var val _ cycle repeats for every remaining /proc/meminfo key from Inactive(anon) through HardwareCorrupted; none of them match AnonHugePages]
00:02:50.405 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:50.405 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:50.405 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:50.405 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
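The repeated [[ key == ... ]] / continue pairs above are setup/common.sh's get_meminfo helper scanning every line of /proc/meminfo until it reaches the requested key, then echoing that key's numeric value (here 0 for AnonHugePages), which setup/hugepages.sh stores as anon. A minimal sketch of that helper, reconstructed only from this xtrace (the real code lives in SPDK's test/setup/common.sh and may differ in detail):

    shopt -s extglob                            # the "Node +([0-9]) " strip below uses an extended glob
    get_meminfo() {
        local get=$1 node=$2                    # key to look up, optional NUMA node number
        local var val
        local mem_f=/proc/meminfo
        local -a mem
        # With a node argument, read the per-NUMA-node meminfo instead of the global one.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")        # per-node files prefix each line with "Node N "
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue    # the long run of key tests seen in the trace
            echo "$val"                         # e.g. 0 for AnonHugePages, 1025 for HugePages_Total
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }
    # Usage as in the trace: anon=$(get_meminfo AnonHugePages)   # -> 0 on this machine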
00:02:50.405 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:50.405 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:50.405 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:02:50.405 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:02:50.405 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:50.405 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:50.405 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:50.405 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:50.405 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:50.405 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:50.405 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:50.405 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:50.405 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37589840 kB' 'MemAvailable: 42276788 kB' 'Buffers: 2696 kB' 'Cached: 18427164 kB' 'SwapCached: 0 kB' 'Active: 14430452 kB' 'Inactive: 4470784 kB' 'Active(anon): 13841292 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474664 kB' 'Mapped: 213956 kB' 'Shmem: 13369916 kB' 'KReclaimable: 241256 kB' 'Slab: 631796 kB' 'SReclaimable: 241256 kB' 'SUnreclaim: 390540 kB' 'KernelStack: 12848 kB' 'PageTables: 8244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 14952600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198860 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2727516 kB' 'DirectMap2M: 19212288 kB' 'DirectMap1G: 47185920 kB'
[the @32 key test / @32 continue / @31 IFS=': ' / @31 read -r var val _ cycle repeats for every key from MemTotal through HugePages_Rsvd; none of them match HugePages_Surp]
00:02:50.407 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:50.407 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:50.407 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:50.670 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
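The long printf '%s\n' line above is the raw meminfo snapshot the helper iterates over; for this test the interesting fields are the hugepage counters, which here read HugePages_Total: 1025, HugePages_Free: 1025, HugePages_Rsvd: 0 and HugePages_Surp: 0 with a 2048 kB page size. A quick way to pull just those fields on a live box, purely as an illustration and not part of the SPDK scripts:

    # Hypothetical one-off check, equivalent to what get_meminfo extracts one key at a time:
    grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|AnonHugePages):' /proc/meminfo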
00:02:50.670 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:50.670 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:50.670 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:02:50.670 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:02:50.670 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:50.670 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:50.670 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:50.670 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:50.670 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:50.670 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:50.670 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:50.670 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:50.670 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37589840 kB' 'MemAvailable: 42276788 kB' 'Buffers: 2696 kB' 'Cached: 18427180 kB' 'SwapCached: 0 kB' 'Active: 14429840 kB' 'Inactive: 4470784 kB' 'Active(anon): 13840680 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474024 kB' 'Mapped: 213820 kB' 'Shmem: 13369932 kB' 'KReclaimable: 241256 kB' 'Slab: 631812 kB' 'SReclaimable: 241256 kB' 'SUnreclaim: 390556 kB' 'KernelStack: 12896 kB' 'PageTables: 8284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 14952624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198844 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2727516 kB' 'DirectMap2M: 19212288 kB' 'DirectMap1G: 47185920 kB'
[the @32 key test / @32 continue / @31 IFS=': ' / @31 read -r var val _ cycle repeats for every key from MemTotal through HugePages_Free; none of them match HugePages_Rsvd]
00:02:50.671 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:50.671 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:50.671 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:50.671 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:50.671 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:02:50.671 nr_hugepages=1025
00:02:50.671 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:50.671 resv_hugepages=0
00:02:50.671 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:50.671 surplus_hugepages=0
00:02:50.671 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:50.671 anon_hugepages=0
00:02:50.671 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:02:50.671 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
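With anon, surp and resv all derived, hugepages.sh prints them (nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and then, at its lines 107 and 109, asserts that the total the kernel reports is exactly the odd page count the test requested, with no surplus or reserved pages hiding in it. A small sketch of that bookkeeping using the values from this run; the literal 1025 in the trace is an already-expanded value, and the variable names below are only illustrative where they go beyond what the trace shows:

    # Values as echoed by setup/hugepages.sh in this run:
    nr_hugepages=1025      # the odd number of 2048 kB pages the odd_alloc test asked for
    surp=0                 # HugePages_Surp from get_meminfo
    resv=0                 # HugePages_Rsvd from get_meminfo
    total=1025             # HugePages_Total as reported in the meminfo snapshots above

    # The odd_alloc assertions: no surplus/reserved pages may inflate the total,
    # and the kernel must have honoured the odd allocation exactly.
    (( total == nr_hugepages + surp + resv )) || { echo "surplus/reserved mismatch" >&2; exit 1; }
    (( total == nr_hugepages ))               || { echo "odd allocation not honoured" >&2; exit 1; }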
00:02:50.671 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:50.671 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:50.671 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:02:50.672 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:02:50.672 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:50.672 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:50.672 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:50.672 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:50.672 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:50.672 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:50.672 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:50.672 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:50.672 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37590092 kB' 'MemAvailable: 42277040 kB' 'Buffers: 2696 kB' 'Cached: 18427200 kB' 'SwapCached: 0 kB' 'Active: 14429828 kB' 'Inactive: 4470784 kB' 'Active(anon): 13840668 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473988 kB' 'Mapped: 213820 kB' 'Shmem: 13369952 kB' 'KReclaimable: 241256 kB' 'Slab: 631812 kB' 'SReclaimable: 241256 kB' 'SUnreclaim: 390556 kB' 'KernelStack: 12880 kB' 'PageTables: 8236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 14952644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198844 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2727516 kB' 'DirectMap2M: 19212288 kB' 'DirectMap1G: 47185920 kB'
[the @32 key test / @32 continue / @31 IFS=': ' / @31 read -r var val _ cycle repeats for every key from MemTotal through SecPageTables; none of them match HugePages_Total]
00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.673 00:50:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21943648 kB' 'MemUsed: 10886236 kB' 'SwapCached: 0 kB' 'Active: 8407456 kB' 'Inactive: 187456 kB' 'Active(anon): 8011300 kB' 'Inactive(anon): 0 kB' 'Active(file): 396156 kB' 'Inactive(file): 187456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8358504 kB' 'Mapped: 101728 kB' 'AnonPages: 239584 kB' 'Shmem: 7774892 kB' 'KernelStack: 6616 kB' 'PageTables: 3932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118044 kB' 'Slab: 330748 kB' 'SReclaimable: 118044 kB' 'SUnreclaim: 212704 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
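[editor note] The lookup traced above follows one pattern throughout: pick /proc/meminfo or the per-node copy under /sys/devices/system/node/nodeN/meminfo, strip the "Node N " prefix those per-node files carry, then read key/value pairs with IFS=': ' until the requested field matches and echo its value. A minimal stand-alone sketch of that pattern is below; get_mem_field is an illustrative name, not the SPDK helper itself.

  # Sketch of the meminfo lookup pattern seen in the trace (illustrative helper name).
  shopt -s extglob
  get_mem_field() {
      local get=$1 node=$2 mem_f=/proc/meminfo line var val _
      # Per-node files live under /sys and prefix every line with "Node N ".
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      while read -r line; do
          line=${line#Node +([0-9]) }            # strip the per-node prefix, if any
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "$val"                        # value only, e.g. a kB figure or a page count
              return 0
          fi
      done < "$mem_f"
      return 1
  }

On the box in this log, get_mem_field HugePages_Total would print 1025 and get_mem_field HugePages_Surp 0 would print 0, matching the echo/return lines in the trace above and below.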
00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.673 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.674 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
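[editor note] For context on where the 512/513 split being verified here comes from: per-node huge page counts are controlled through the kernel's per-node sysfs counters, which setup.sh drives via the HUGENODE variable that shows up further down in this log for the custom_alloc case. A hedged sketch of requesting an uneven split like this one directly, assuming 2048 kB pages and two nodes, would be:

  # Request an uneven per-node split of 2 MB huge pages (needs root).
  hpdir=hugepages/hugepages-2048kB
  echo 512 > /sys/devices/system/node/node0/$hpdir/nr_hugepages
  echo 513 > /sys/devices/system/node/node1/$hpdir/nr_hugepages
  grep HugePages_Total /proc/meminfo    # should now report 1025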
00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711844 kB' 'MemFree: 15646444 kB' 'MemUsed: 12065400 kB' 'SwapCached: 0 kB' 'Active: 6022244 kB' 'Inactive: 4283328 kB' 'Active(anon): 5829240 kB' 'Inactive(anon): 0 kB' 'Active(file): 193004 kB' 'Inactive(file): 4283328 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10071432 kB' 'Mapped: 112092 kB' 'AnonPages: 234224 kB' 'Shmem: 5595100 kB' 'KernelStack: 6264 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 123212 kB' 'Slab: 301064 kB' 'SReclaimable: 123212 kB' 'SUnreclaim: 177852 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
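[editor note] Stripped of the xtrace, the check running here is simple arithmetic: the global HugePages_Total (1025) must equal the requested count plus surplus and reserved pages, and the per-node counters must add back up to that global figure (512 on node 0 and 513 on node 1 in this run, as the two per-node printf dumps show). A stand-alone equivalent reading the standard kernel sysfs counters, rather than the script's own helpers, might look like this (2048 kB pages assumed):

  # Per-node hugepage accounting, as a stand-alone sketch.
  hpdir=hugepages/hugepages-2048kB
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1025 in this run
  sum=0
  for node in /sys/devices/system/node/node[0-9]*; do
      pages=$(< "$node/$hpdir/nr_hugepages")                    # 512 on node0, 513 on node1 here
      surp=$(< "$node/$hpdir/surplus_hugepages")                # 0 on both nodes in this run
      echo "${node##*/}: $pages pages ($surp surplus)"
      (( sum += pages + surp ))                                 # global total counts surplus too
  done
  (( sum == total )) && echo "per-node totals add up to HugePages_Total=$total"

This mirrors what the trace does with nodes_test[node] += HugePages_Surp before comparing against the expected per-node figures ("node0=512 expecting 513" / "node1=513 expecting 512" later in the log).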
00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.675 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.676 00:50:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:02:50.676 node0=512 expecting 513 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:02:50.676 node1=513 expecting 512 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:02:50.676 00:02:50.676 real 0m1.680s 00:02:50.676 user 0m0.743s 00:02:50.676 sys 0m0.903s 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:50.676 00:50:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:50.676 ************************************ 00:02:50.676 END TEST odd_alloc 00:02:50.676 ************************************ 00:02:50.676 00:50:02 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:02:50.676 00:50:02 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:50.676 00:50:02 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:50.676 00:50:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:50.676 ************************************ 00:02:50.676 START TEST custom_alloc 00:02:50.676 ************************************ 00:02:50.676 00:50:02 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:02:50.676 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:02:50.676 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:02:50.676 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:02:50.676 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:02:50.676 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:02:50.676 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:02:50.676 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:50.676 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:50.676 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:50.676 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:50.676 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:50.676 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:50.676 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:50.676 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:50.676 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:50.676 
00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:50.676 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:50.676 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:50.676 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:50.676 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:50.676 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:50.676 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:02:50.676 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:50.676 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # 
HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:50.677 00:50:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:52.051 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:52.051 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:52.051 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:52.051 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:52.051 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:52.051 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:52.051 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:52.051 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:52.051 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:52.051 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:52.051 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:52.051 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:52.051 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:52.051 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:52.051 0000:80:04.2 (8086 0e22): 
Already using the vfio-pci driver 00:02:52.051 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:52.051 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36522724 kB' 'MemAvailable: 41209672 kB' 'Buffers: 2696 kB' 'Cached: 18427292 kB' 'SwapCached: 0 kB' 'Active: 14430652 kB' 'Inactive: 4470784 kB' 'Active(anon): 13841492 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474608 kB' 'Mapped: 213848 kB' 'Shmem: 13370044 kB' 'KReclaimable: 241256 kB' 'Slab: 632012 kB' 'SReclaimable: 241256 kB' 'SUnreclaim: 390756 kB' 'KernelStack: 12976 kB' 'PageTables: 8516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 14953292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199004 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2727516 kB' 'DirectMap2M: 19212288 kB' 'DirectMap1G: 47185920 kB' 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.317 00:50:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.317 00:50:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.317 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.318 00:50:04 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile 
-t mem 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36526136 kB' 'MemAvailable: 41213084 kB' 'Buffers: 2696 kB' 'Cached: 18427292 kB' 'SwapCached: 0 kB' 'Active: 14430600 kB' 'Inactive: 4470784 kB' 'Active(anon): 13841440 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474560 kB' 'Mapped: 213832 kB' 'Shmem: 13370044 kB' 'KReclaimable: 241256 kB' 'Slab: 632000 kB' 'SReclaimable: 241256 kB' 'SUnreclaim: 390744 kB' 'KernelStack: 12976 kB' 'PageTables: 8500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 14953308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198972 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2727516 kB' 'DirectMap2M: 19212288 kB' 'DirectMap1G: 47185920 kB' 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.318 00:50:04 
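
The anon=0 capture above and the HugePages_Surp lookup now in progress both go through common.sh's get_meminfo helper, whose field-by-field expansion dominates this part of the trace. Condensed into ordinary shell it behaves roughly as below; this is a sketch reconstructed from the common.sh@17-@33 records (names taken from the log, everything else is an assumption):

    # get_meminfo KEY [NODE] - print the value column for KEY from /proc/meminfo,
    # or from the per-node meminfo file when a NUMA node is given.
    get_meminfo() {
        local get=$1 node=${2:-} var val line
        local mem_f=/proc/meminfo
        local -a mem
        shopt -s extglob                           # the +([0-9]) pattern below is an extglob
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")           # per-node files prefix every line with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue       # this is the repeated "continue" seen in the trace
            echo "$val"                            # e.g. the 0 captured as anon= above
            return 0
        done
        return 1
    }

Given the dump above, get_meminfo HugePages_Total would print 1536; each verify_nr_hugepages counter (anon, surp, resv) is just such a lookup captured into a local, which is why the same matching loop repeats for every /proc/meminfo field before the wanted key is reached.
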
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.318 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.319 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36527048 kB' 'MemAvailable: 41213996 kB' 'Buffers: 2696 kB' 'Cached: 18427300 kB' 'SwapCached: 0 kB' 'Active: 14430596 kB' 'Inactive: 4470784 kB' 'Active(anon): 13841436 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474564 kB' 'Mapped: 213832 kB' 'Shmem: 13370052 kB' 'KReclaimable: 241256 kB' 'Slab: 632000 kB' 'SReclaimable: 241256 kB' 'SUnreclaim: 
390744 kB' 'KernelStack: 12960 kB' 'PageTables: 8408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 14953332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198972 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2727516 kB' 'DirectMap2M: 19212288 kB' 'DirectMap1G: 47185920 kB' 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.320 
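
The meminfo snapshots embedded above are internally consistent with that request: 512 pages on node 0 plus 1024 on node 1 gives HugePages_Total: 1536; at Hugepagesize: 2048 kB that is 1536 * 2048 = 3145728 kB, exactly the Hugetlb figure, and HugePages_Free: 1536 with HugePages_Rsvd and HugePages_Surp at 0 means nothing has consumed or over-allocated the pool yet. An illustrative one-liner to recompute the same check on a live system (field names as in the dumps above; not part of the test scripts):

    # Recompute Hugetlb from HugePages_Total x Hugepagesize and compare with the reported value
    awk '/^HugePages_Total:/ {t=$2}
         /^Hugepagesize:/    {sz=$2}
         /^Hugetlb:/         {h=$2}
         END {printf "%d pages x %d kB = %d kB (reported %d kB)\n", t, sz, t*sz, h}' /proc/meminfo
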
00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.320 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.321 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.321 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.321 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.321 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.321 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.321 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.321 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.321 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:52.321 00:50:04 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:52.321 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[setup/common.sh@31-@32: the IFS=': ' read/compare loop repeats for each remaining /proc/meminfo key - Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free - none matches HugePages_Rsvd, so each iteration hits continue (00:02:52.321-00:02:52.322)]
00:02:52.322 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
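The scan above is setup/common.sh's get_meminfo walking the meminfo dump one "key: value" pair at a time with IFS=': ' read -r var val _, skipping every key until the requested one (HugePages_Rsvd here) turns up and its value is echoed. A minimal standalone sketch of that lookup follows; the function name get_meminfo_sketch and its packaging are illustrative assumptions, not the exact SPDK helper.

#!/usr/bin/env bash
# Illustrative sketch of the lookup traced above (hypothetical name,
# not the real test/setup/common.sh implementation).
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo var val _
    # Per-node queries read the node-local file when it exists; its lines
    # carry a "Node N " prefix, stripped here with sed.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip non-matching keys
        echo "$val"                        # value in kB (or pages for HugePages_*)
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

get_meminfo_sketch HugePages_Rsvd        # system-wide reserved hugepages
get_meminfo_sketch HugePages_Surp 0      # surplus hugepages on NUMA node 0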
00:02:52.322 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:52.322 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:52.322 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:52.322 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:02:52.322 nr_hugepages=1536 00:02:52.322 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:52.322 resv_hugepages=0 00:02:52.322 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:52.322 surplus_hugepages=0 00:02:52.322 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:52.322 anon_hugepages=0 00:02:52.322 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:52.322 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:02:52.322 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:52.322 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:52.322 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:52.322 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:52.322 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:52.322 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.322 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:52.322 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:52.322 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.322 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.322 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.322 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.322 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36527108 kB' 'MemAvailable: 41214056 kB' 'Buffers: 2696 kB' 'Cached: 18427312 kB' 'SwapCached: 0 kB' 'Active: 14429888 kB' 'Inactive: 4470784 kB' 'Active(anon): 13840728 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473836 kB' 'Mapped: 213832 kB' 'Shmem: 13370064 kB' 'KReclaimable: 241256 kB' 'Slab: 632024 kB' 'SReclaimable: 241256 kB' 'SUnreclaim: 390768 kB' 'KernelStack: 12928 kB' 'PageTables: 8320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 14953352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198972 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 3145728 kB' 'DirectMap4k: 2727516 kB' 'DirectMap2M: 19212288 kB' 'DirectMap1G: 47185920 kB'
[setup/common.sh@31-@32: the IFS=': ' read/compare loop walks every key of the dump above, from MemTotal through HugePages_Free; none matches HugePages_Total, so each iteration hits continue (00:02:52.322-00:02:52.324)]
00:02:52.324 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:52.324 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:02:52.324 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:52.324 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:02:52.324 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:52.324 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:02:52.324 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:52.324 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:52.324 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:52.324 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:52.324 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:52.324 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
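At this point hugepages.sh has confirmed that the 1536 pages reported by HugePages_Total match the configured count plus surplus plus reserved pages, and get_nodes has recorded the per-node request (512 pages on node0, 1024 on node1) by globbing /sys/devices/system/node. A hedged sketch of that accounting check is below; the function name, error message, and node-count printout are invented for illustration, not the hugepages.sh code itself.

#!/usr/bin/env bash
# Illustrative version of the check around setup/hugepages.sh@107-@110:
# total hugepages must equal the configured count plus surplus plus reserved.
# check_hugepage_accounting is a hypothetical name.
check_hugepage_accounting() {
    local nr_hugepages=$1 total surp resv
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    if (( total != nr_hugepages + surp + resv )); then
        echo "hugepage accounting mismatch: $total != $nr_hugepages + $surp + $resv" >&2
        return 1
    fi
}

# Count NUMA nodes the same way get_nodes does, by globbing sysfs.
shopt -s extglob nullglob
nodes=(/sys/devices/system/node/node+([0-9]))
echo "found ${#nodes[@]} NUMA nodes"
check_hugepage_accounting 1536   # value used in this run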
00:02:52.324 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:52.324 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:52.324 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:52.324 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:52.324 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:02:52.324 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:52.324 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:52.324 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:52.324 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:52.324 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:52.324 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:52.324 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:52.324 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:52.324 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:52.324 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21937400 kB' 'MemUsed: 10892484 kB' 'SwapCached: 0 kB' 'Active: 8407764 kB' 'Inactive: 187456 kB' 'Active(anon): 8011608 kB' 'Inactive(anon): 0 kB' 'Active(file): 396156 kB' 'Inactive(file): 187456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8358576 kB' 'Mapped: 101740 kB' 'AnonPages: 239804 kB' 'Shmem: 7774964 kB' 'KernelStack: 6648 kB' 'PageTables: 3988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118044 kB' 'Slab: 330908 kB' 'SReclaimable: 118044 kB' 'SUnreclaim: 212864 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[setup/common.sh@31-@32: the read/compare loop walks every key of the node0 dump above, from MemTotal through HugePages_Free; none matches HugePages_Surp, so each iteration hits continue (00:02:52.324-00:02:52.325)]
00:02:52.325 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:52.325 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:52.325 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:52.325 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:52.325 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:52.325 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
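Node 0 reports no surplus hugepages, and the loop now repeats the same lookup for node 1 from /sys/devices/system/node/node1/meminfo. A loose sketch of the shape of that per-node pass follows; the nodes_test seeding, helper name, and output format below are assumptions made for the example, not the exact bookkeeping hugepages.sh performs.

#!/usr/bin/env bash
# Illustrative per-node pass mirroring setup/hugepages.sh@115-@117 above:
# fold reserved and per-node surplus pages into each node's expected count.
# nodes_test seeding and node_surplus are hypothetical for this sketch.
declare -A nodes_test=([0]=512 [1]=1024)   # per-node split seen in this run
resv=0                                     # HugePages_Rsvd read earlier

node_surplus() {
    # Per-node meminfo lines look like "Node 0 HugePages_Surp: 0".
    awk '/HugePages_Surp:/ {print $NF}' "/sys/devices/system/node/node$1/meminfo"
}

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    (( nodes_test[node] += $(node_surplus "$node") ))
    echo "node$node=${nodes_test[node]}"
done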
00:02:52.325 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:02:52.325 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:52.325 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:02:52.325 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:52.325 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:52.325 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:52.325 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:02:52.325 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:02:52.325 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:52.325 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:52.325 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:52.325 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:52.325 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711844 kB' 'MemFree: 14589344 kB' 'MemUsed: 13122500 kB' 'SwapCached: 0 kB' 'Active: 6022348 kB' 'Inactive: 4283328 kB' 'Active(anon): 5829344 kB' 'Inactive(anon): 0 kB' 'Active(file): 193004 kB' 'Inactive(file): 4283328 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10071476 kB' 'Mapped: 112092 kB' 'AnonPages: 234224 kB' 'Shmem: 5595144 kB' 'KernelStack: 6264 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 123212 kB' 'Slab: 301116 kB' 'SReclaimable: 123212 kB' 'SUnreclaim: 177904 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[setup/common.sh@31-@32: the read/compare loop walks every key of the node1 dump above, from MemTotal through HugePages_Free; none matches HugePages_Surp, so each iteration hits continue (00:02:52.325-00:02:52.327)]
00:02:52.327 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:52.327 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:52.327 00:50:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:52.327 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:52.327 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:52.327 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:52.327 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:52.327 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:02:52.327 node0=512 
expecting 512 00:02:52.327 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:52.327 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:52.327 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:52.327 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:02:52.327 node1=1024 expecting 1024 00:02:52.327 00:50:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:02:52.327 00:02:52.327 real 0m1.671s 00:02:52.327 user 0m0.690s 00:02:52.327 sys 0m0.949s 00:02:52.327 00:50:04 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:52.327 00:50:04 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:52.327 ************************************ 00:02:52.327 END TEST custom_alloc 00:02:52.327 ************************************ 00:02:52.327 00:50:04 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:02:52.327 00:50:04 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:52.327 00:50:04 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:52.327 00:50:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:52.327 ************************************ 00:02:52.327 START TEST no_shrink_alloc 00:02:52.327 ************************************ 00:02:52.327 00:50:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:02:52.327 00:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:02:52.327 00:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:52.327 00:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:52.327 00:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:02:52.327 00:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:52.327 00:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:52.327 00:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:52.327 00:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:52.327 00:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:52.327 00:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:52.327 00:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:52.327 00:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:52.327 00:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:52.327 00:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:52.327 00:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:52.327 00:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:52.327 00:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:52.327 00:50:04 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:52.327 00:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:52.327 00:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:02:52.327 00:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:52.327 00:50:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:53.704 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:53.704 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:53.704 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:53.704 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:53.704 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:53.704 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:53.704 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:53.704 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:53.704 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:53.704 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:53.704 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:53.704 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:53.704 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:53.704 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:53.704 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:53.704 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:53.704 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:53.704 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:02:53.704 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:02:53.704 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:53.704 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:53.705 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:53.705 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:53.705 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:53.705 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:53.705 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:53.705 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:53.705 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:53.705 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:53.705 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.705 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.705 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.705 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.705 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- 
# mapfile -t mem 00:02:53.705 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.705 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.705 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.705 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37438044 kB' 'MemAvailable: 42124992 kB' 'Buffers: 2696 kB' 'Cached: 18427420 kB' 'SwapCached: 0 kB' 'Active: 14430660 kB' 'Inactive: 4470784 kB' 'Active(anon): 13841500 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474504 kB' 'Mapped: 213864 kB' 'Shmem: 13370172 kB' 'KReclaimable: 241256 kB' 'Slab: 631908 kB' 'SReclaimable: 241256 kB' 'SUnreclaim: 390652 kB' 'KernelStack: 12896 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14953424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198956 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2727516 kB' 'DirectMap2M: 19212288 kB' 'DirectMap1G: 47185920 kB' 00:02:53.705 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.705 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.705 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.705 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.705 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.705 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.705 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.705 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.705 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.705 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.705 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.705 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.705 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.705 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.705 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.705 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.705 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.705 00:50:06 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue
[... trace condensed: the field-by-field scan of /proc/meminfo continues; each remaining field (SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk) is compared against AnonHugePages and skipped via "setup/common.sh@32 -- # continue", "@31 -- # IFS=': '", "@31 -- # read -r var val _"; the log resumes below in the middle of the Percpu check ...]
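What this part of the trace shows is setup/common.sh's get_meminfo helper scanning /proc/meminfo (or a node's meminfo file) one "Field: value" pair at a time with IFS=': ' and read -r var val _, then echoing the value once the requested field matches. A minimal standalone sketch of that lookup pattern, assuming the standard "Field: value kB" layout; the function name and the node-argument handling below are illustrative, not the project's exact code:

  #!/usr/bin/env bash
  # get_meminfo_field FIELD [NODE] - print the value of FIELD from /proc/meminfo,
  # or from /sys/devices/system/node/nodeN/meminfo when a NUMA node is given.
  get_meminfo_field() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local line var val _
      while read -r line; do
          line=${line#"Node $node "}         # per-node lines carry a "Node N " prefix
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then      # same field-by-field match as the trace
              echo "$val"
              return 0
          fi
      done < "$mem_f"
      return 1
  }

  get_meminfo_field HugePages_Total    # 1024 on this host, per the snapshots in this log
  get_meminfo_field HugePages_Surp     # 0

Against the meminfo snapshots printed above it would return 1024 for HugePages_Total and 0 for AnonHugePages and HugePages_Surp, which is what the anon=0 and surp=0 assignments below record.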
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37438452 kB' 'MemAvailable: 42125400 kB' 'Buffers: 2696 kB' 'Cached: 18427420 kB' 'SwapCached: 0 kB' 'Active: 14430632 kB' 'Inactive: 4470784 kB' 'Active(anon): 13841472 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474536 kB' 'Mapped: 213920 kB' 'Shmem: 13370172 kB' 'KReclaimable: 241256 kB' 'Slab: 631984 kB' 'SReclaimable: 241256 kB' 'SUnreclaim: 390728 kB' 'KernelStack: 12928 kB' 'PageTables: 8300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14953440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198924 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2727516 kB' 'DirectMap2M: 19212288 kB' 'DirectMap1G: 47185920 kB' 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.706 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.707 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.707 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.707 00:50:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _
[... trace condensed: the field-by-field scan of /proc/meminfo continues; each remaining field (Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free) is compared against HugePages_Surp and skipped via "setup/common.sh@32 -- # continue", "@31 -- # IFS=': '", "@31 -- # read -r var val _" ...]
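The counts being verified here were set per NUMA node: custom_alloc checked node0=512 expecting 512 and node1=1024 expecting 1024 above, and no_shrink_alloc asked get_test_nr_hugepages for 1024 pages on node 0. Besides /proc/meminfo, the kernel exposes the same pool per node under /sys/devices/system/node; a short sketch for inspecting those per-node counters (the hugepages-2048kB directory matches the Hugepagesize: 2048 kB reported in the snapshots; this loop is illustrative and not part of the test scripts):

  #!/usr/bin/env bash
  # Print the 2 MB hugepage pool per NUMA node: configured, free and surplus pages.
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      hp=$node_dir/hugepages/hugepages-2048kB
      [[ -d $hp ]] || continue
      printf '%s: nr=%s free=%s surplus=%s\n' "${node_dir##*/}" \
          "$(<"$hp/nr_hugepages")" "$(<"$hp/free_hugepages")" "$(<"$hp/surplus_hugepages")"
  done
  # With root, a node can be resized directly, e.g. the 1024 pages on node 0
  # that no_shrink_alloc requests:
  #   echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages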
00:02:53.969 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.969 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.969 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.969 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.969 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.969 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.969 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:53.969 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:53.969 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:53.969 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:53.969 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:53.969 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:53.969 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.969 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.969 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37439276 kB' 'MemAvailable: 42126224 kB' 'Buffers: 2696 kB' 'Cached: 18427440 kB' 'SwapCached: 0 kB' 'Active: 14430464 kB' 'Inactive: 4470784 kB' 'Active(anon): 13841304 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474288 kB' 'Mapped: 213844 kB' 'Shmem: 13370192 kB' 'KReclaimable: 241256 kB' 'Slab: 631988 kB' 'SReclaimable: 241256 kB' 'SUnreclaim: 390732 kB' 'KernelStack: 12896 kB' 'PageTables: 8200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14953464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198924 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2727516 kB' 'DirectMap2M: 19212288 kB' 'DirectMap1G: 47185920 kB' 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 00:50:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:02:53.970 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.971 
00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 
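Note: the trace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo field by field (with node unset, /sys/devices/system/node/node/meminfo does not exist, so it falls back to /proc/meminfo) until it reaches the requested key, here HugePages_Rsvd, and echoing its value. A minimal sketch of that pattern, simplified from the traced logic rather than copied from the real script, looks like this:

    get_meminfo() {                              # sketch only; names follow the trace
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        # per-node lookups use that node's meminfo when it exists; otherwise fall back
        [[ -e /sys/devices/system/node/node$node/meminfo ]] && \
            mem_f=/sys/devices/system/node/node$node/meminfo
        # (the real helper also strips the "Node N " prefix from per-node files)
        local var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then        # e.g. HugePages_Rsvd
                echo "$val"                      # numeric value, "kB" suffix dropped into $_
                return 0
            fi
        done < "$mem_f"
        return 1
    }

With a helper shaped like that, the assignments that follow in the trace amount to surp=$(get_meminfo HugePages_Surp) and resv=$(get_meminfo HugePages_Rsvd), both 0 here.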
00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:53.971 nr_hugepages=1024 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:53.971 resv_hugepages=0 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:53.971 surplus_hugepages=0 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:53.971 anon_hugepages=0 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.971 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37439024 kB' 'MemAvailable: 42125972 kB' 'Buffers: 2696 kB' 'Cached: 18427464 kB' 'SwapCached: 0 kB' 'Active: 14430520 kB' 'Inactive: 4470784 kB' 'Active(anon): 13841360 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474360 kB' 'Mapped: 213844 kB' 'Shmem: 13370216 kB' 'KReclaimable: 241256 kB' 'Slab: 631988 kB' 'SReclaimable: 241256 kB' 'SUnreclaim: 390732 kB' 'KernelStack: 12928 kB' 'PageTables: 8296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14953484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198924 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2727516 kB' 'DirectMap2M: 19212288 kB' 'DirectMap1G: 47185920 kB' 00:02:53.972 
00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 00:50:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.972 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 00:50:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- 
# for node in /sys/devices/system/node/node+([0-9]) 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 20880744 kB' 'MemUsed: 11949140 kB' 'SwapCached: 0 kB' 'Active: 8407228 kB' 'Inactive: 187456 kB' 'Active(anon): 8011072 kB' 'Inactive(anon): 0 kB' 'Active(file): 396156 kB' 'Inactive(file): 187456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8358628 kB' 'Mapped: 101752 kB' 'AnonPages: 239144 kB' 'Shmem: 7775016 kB' 'KernelStack: 6616 kB' 'PageTables: 3936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118044 kB' 'Slab: 330828 kB' 'SReclaimable: 118044 kB' 'SUnreclaim: 212784 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.973 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.974 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.975 00:50:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.975 00:50:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:53.975 node0=1024 expecting 1024 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:53.975 00:50:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:55.352 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:55.352 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:55.352 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:55.352 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:55.352 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:55.352 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:55.352 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:55.352 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:55.352 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:55.352 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:55.352 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:55.352 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:55.352 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:55.352 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:55.352 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:55.352 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:55.352 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:55.352 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:02:55.352 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:02:55.352 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:02:55.352 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:55.352 00:50:07 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:55.352 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:55.352 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:55.352 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:55.352 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:55.352 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:55.352 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:55.352 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:55.352 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:55.352 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:55.352 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.352 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.352 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37463128 kB' 'MemAvailable: 42150076 kB' 'Buffers: 2696 kB' 'Cached: 18427536 kB' 'SwapCached: 0 kB' 'Active: 14430996 kB' 'Inactive: 4470784 kB' 'Active(anon): 13841836 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474748 kB' 'Mapped: 213868 kB' 'Shmem: 13370288 kB' 'KReclaimable: 241256 kB' 'Slab: 632008 kB' 'SReclaimable: 241256 kB' 'SUnreclaim: 390752 kB' 'KernelStack: 12912 kB' 'PageTables: 8244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14953792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198892 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2727516 kB' 'DirectMap2M: 19212288 kB' 'DirectMap1G: 47185920 kB' 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
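The xtrace running through this stretch is setup/common.sh's get_meminfo: it snapshots /proc/meminfo (or a per-node meminfo file when a node is given), strips the "Node N" prefix, then scans field by field until it reaches the key it was asked for (AnonHugePages on this pass) and echoes that field's numeric value. A minimal bash sketch of the same pattern, assuming the real SPDK helper differs in details:

#!/usr/bin/env bash
# sketch of the get_meminfo pattern traced above; not the literal SPDK implementation
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # per-node source, mirroring the /sys/devices/system/node/node*/meminfo check in the trace
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    # per-node files prefix every line with "Node N "; strip it, as common.sh@29 does
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# example against the snapshot above: HugePages_Total is 1024 pages of 2048 kB,
# which matches the reported Hugetlb figure (1024 * 2048 = 2097152 kB)
get_meminfo HugePages_Total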
00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.353 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
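These repeated get_meminfo scans all feed verify_nr_hugepages (setup/hugepages.sh@204 above): it reads AnonHugePages on this pass, then HugePages_Surp and HugePages_Rsvd in the passes that follow, accumulates a per-node count, and compares each node against the expected total, printing lines like "node0=1024 expecting 1024". A rough sketch of that flow; the per-node get_meminfo call form here is an assumption, not taken from the trace:

# rough sketch of the verification traced around setup/hugepages.sh@89-130; simplified
verify_nr_hugepages() {
    local expected=${1:-1024}
    local surp node
    surp=$(get_meminfo HugePages_Surp)   # 0 in the snapshots above
    local -a nodes_test
    # hypothetical per-node read; the real script also folds in AnonHugePages / HugePages_Rsvd
    nodes_test[0]=$(get_meminfo HugePages_Total 0)
    (( nodes_test[0] += surp ))          # mirrors the "nodes_test[node] += 0" step above
    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_test[$node]} expecting $expected"
        [[ ${nodes_test[$node]} == "$expected" ]] || return 1
    done
}

verify_nr_hugepages 1024   # passes on this box: node0 still holds 1024 pages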
00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.354 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37466832 kB' 'MemAvailable: 42153780 kB' 'Buffers: 2696 kB' 'Cached: 18427540 kB' 'SwapCached: 0 kB' 'Active: 14430848 kB' 'Inactive: 4470784 kB' 'Active(anon): 13841688 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474696 kB' 'Mapped: 213928 kB' 'Shmem: 13370292 kB' 'KReclaimable: 241256 kB' 'Slab: 632024 kB' 'SReclaimable: 241256 kB' 'SUnreclaim: 390768 kB' 'KernelStack: 12928 kB' 'PageTables: 8296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14953812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198860 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2727516 kB' 'DirectMap2M: 19212288 kB' 'DirectMap1G: 47185920 kB' 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.355 00:50:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.355 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
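Stepping back to the start of this stretch: setup/hugepages.sh@202 sets CLEAR_HUGE=no and NRHUGE=512 and re-runs scripts/setup.sh, and the script answers with "INFO: Requested 512 hugepages but 1024 already allocated on node0", i.e. the existing, larger reservation is left in place (the "no_shrink_alloc" case this test exercises). A hedged reconstruction of that invocation, assuming setup.sh reads both knobs from the environment as the trace implies:

# sketch of the call traced at setup/hugepages.sh@202 -> setup/common.sh@10;
# knob semantics are inferred from the surrounding log, not re-verified against setup.sh
export NRHUGE=512       # ask for 512 x 2048 kB hugepages
export CLEAR_HUGE=no    # do not tear down the existing reservation first
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
# on this box setup.sh finds 1024 pages already reserved on node0 and keeps them,
# so the later verify_nr_hugepages still sees HugePages_Total: 1024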
00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.356 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
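One reading aid for this loop: the backslash-riddled right-hand sides such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are just how bash's xtrace prints a quoted pattern operand of == inside [[ ]]; the script itself compares plain strings. A tiny demo, assuming ordinary bash:

get=HugePages_Surp
var=Percpu
set -x
[[ $var == "$get" ]]   # xtrace prints: [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
set +x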
00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.357 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 00:50:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37467392 kB' 'MemAvailable: 42154340 kB' 'Buffers: 2696 kB' 'Cached: 18427556 kB' 'SwapCached: 0 kB' 'Active: 14430748 kB' 'Inactive: 4470784 kB' 'Active(anon): 13841588 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474524 kB' 'Mapped: 213852 kB' 'Shmem: 13370308 kB' 'KReclaimable: 241256 kB' 'Slab: 632048 kB' 'SReclaimable: 241256 kB' 'SUnreclaim: 390792 kB' 'KernelStack: 12944 kB' 'PageTables: 8288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14953832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198860 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2727516 kB' 'DirectMap2M: 19212288 kB' 'DirectMap1G: 47185920 kB' 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.358 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.359 00:50:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.359 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
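The long run of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] ... continue" entries above is setup/common.sh's get_meminfo helper walking the captured /proc/meminfo fields one at a time until it reaches HugePages_Rsvd. A minimal sketch of that scan, assuming a simplified standalone helper (hypothetical name; the real function also handles the per-NUMA-node meminfo files that show up later in this log):

# get_meminfo_sketch: hypothetical, simplified version of the scan traced above.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # each miss is one "continue" xtrace entry
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}
# Usage: get_meminfo_sketch HugePages_Rsvd   -> prints 0 for this run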
00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.360 00:50:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:55.360 nr_hugepages=1024 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:55.360 resv_hugepages=0 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:55.360 surplus_hugepages=0 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:55.360 anon_hugepages=0 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.360 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.360 00:50:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37468076 kB' 'MemAvailable: 42155024 kB' 'Buffers: 2696 kB' 'Cached: 18427580 kB' 'SwapCached: 0 kB' 'Active: 14431056 kB' 'Inactive: 4470784 kB' 'Active(anon): 13841896 kB' 'Inactive(anon): 0 kB' 'Active(file): 589160 kB' 'Inactive(file): 4470784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474836 kB' 'Mapped: 213852 kB' 'Shmem: 13370332 kB' 'KReclaimable: 241256 kB' 'Slab: 632048 kB' 'SReclaimable: 241256 kB' 'SUnreclaim: 390792 kB' 'KernelStack: 12960 kB' 'PageTables: 8348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14956272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198844 kB' 'VmallocChunk: 0 kB' 'Percpu: 38208 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2727516 kB' 'DirectMap2M: 19212288 kB' 'DirectMap1G: 47185920 kB' 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
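The printf above dumps the whole captured /proc/meminfo snapshot: HugePages_Total and HugePages_Free are both 1024 and Hugepagesize is 2048 kB, so the pool is 1024 x 2048 kB = 2097152 kB, matching the Hugetlb line. The accounting check that drives the scan which follows is, roughly (a condensed sketch assuming the hypothetical helper above; the real logic lives in setup/hugepages.sh):

nr_hugepages=1024
resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0 in the snapshot above
surp=$(get_meminfo_sketch HugePages_Surp)     # 0 in the snapshot above
total=$(get_meminfo_sketch HugePages_Total)   # 1024 in the snapshot above
# The pool must not have shrunk: the kernel still reports every requested page.
(( total == nr_hugepages + surp + resv )) || echo 'hugepage pool shrank' >&2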
00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.361 00:50:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.361 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.362 00:50:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.362 00:50:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.362 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.363 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.363 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.363 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 20888144 kB' 'MemUsed: 11941740 kB' 'SwapCached: 0 kB' 'Active: 8407900 kB' 
'Inactive: 187456 kB' 'Active(anon): 8011744 kB' 'Inactive(anon): 0 kB' 'Active(file): 396156 kB' 'Inactive(file): 187456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8358628 kB' 'Mapped: 101760 kB' 'AnonPages: 239812 kB' 'Shmem: 7775016 kB' 'KernelStack: 6808 kB' 'PageTables: 5196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118044 kB' 'Slab: 330692 kB' 'SReclaimable: 118044 kB' 'SUnreclaim: 212648 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.622 00:50:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.622 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.623 00:50:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:55.623 node0=1024 expecting 1024 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:55.623 00:02:55.623 real 0m3.100s 00:02:55.623 user 0m1.286s 00:02:55.623 sys 0m1.750s 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:55.623 00:50:07 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:55.623 ************************************ 00:02:55.623 END TEST no_shrink_alloc 00:02:55.623 ************************************ 00:02:55.623 00:50:07 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:02:55.623 00:50:07 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:55.623 00:50:07 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:55.623 00:50:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:55.623 00:50:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:55.623 00:50:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:55.623 00:50:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:55.623 00:50:07 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:55.623 00:50:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:55.623 00:50:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:55.623 00:50:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:55.623 00:50:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:55.623 00:50:07 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:55.624 00:50:07 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:55.624 00:02:55.624 real 0m12.808s 00:02:55.624 user 0m4.937s 00:02:55.624 sys 0m6.669s 00:02:55.624 00:50:07 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:55.624 00:50:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:55.624 ************************************ 00:02:55.624 END TEST hugepages 00:02:55.624 ************************************ 00:02:55.624 00:50:07 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:55.624 00:50:07 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:55.624 00:50:07 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:55.624 00:50:07 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:55.624 ************************************ 00:02:55.624 START TEST driver 00:02:55.624 ************************************ 00:02:55.624 00:50:07 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:55.624 * Looking for test storage... 
00:02:55.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:55.624 00:50:07 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:02:55.624 00:50:07 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:55.624 00:50:07 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:58.150 00:50:10 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:02:58.150 00:50:10 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:58.150 00:50:10 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:58.150 00:50:10 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:02:58.150 ************************************ 00:02:58.150 START TEST guess_driver 00:02:58.150 ************************************ 00:02:58.150 00:50:10 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:02:58.150 00:50:10 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:02:58.150 00:50:10 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:02:58.150 00:50:10 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:02:58.150 00:50:10 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:02:58.150 00:50:10 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:02:58.150 00:50:10 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:02:58.150 00:50:10 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:02:58.150 00:50:10 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:02:58.150 00:50:10 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:02:58.150 00:50:10 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 189 > 0 )) 00:02:58.150 00:50:10 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:02:58.150 00:50:10 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:02:58.150 00:50:10 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:02:58.150 00:50:10 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:02:58.150 00:50:10 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:02:58.150 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:58.150 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:58.150 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:58.150 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:58.150 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:02:58.150 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:02:58.150 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:02:58.150 00:50:10 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:02:58.150 00:50:10 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:02:58.150 00:50:10 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:02:58.150 00:50:10 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:02:58.150 00:50:10 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:02:58.150 Looking for driver=vfio-pci 00:02:58.150 00:50:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:58.150 00:50:10 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:02:58.150 00:50:10 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:02:58.150 00:50:10 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.525 00:50:11 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:59.525 00:50:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.462 00:50:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.462 00:50:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:00.462 00:50:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.462 00:50:12 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:00.462 00:50:12 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:00.462 00:50:12 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:00.462 00:50:12 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:02.989 00:03:02.989 real 0m4.996s 00:03:02.989 user 0m1.187s 00:03:02.989 sys 0m1.935s 00:03:02.989 00:50:15 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:02.989 00:50:15 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:02.989 ************************************ 00:03:02.989 END TEST guess_driver 00:03:02.989 ************************************ 00:03:02.989 00:03:02.989 real 0m7.506s 00:03:02.989 user 0m1.787s 00:03:02.989 sys 0m2.991s 00:03:02.989 00:50:15 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:02.989 
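The guess_driver run above settles on vfio-pci by checking that /sys/module/vfio/parameters/enable_unsafe_noiommu_mode exists, that the IOMMU groups directory is populated (189 groups here), and that modprobe can resolve vfio_pci and its dependencies to loadable .ko modules. A simplified sketch of that decision, assuming the same sysfs paths (this is not the exact SPDK driver.sh code, and the fallback driver name is an assumption):

    # Prefer vfio-pci when the IOMMU is usable, otherwise fall back to uio_pci_generic.
    pick_driver() {
        if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null &&
           modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
            echo vfio-pci
        else
            echo uio_pci_generic
        fi
    }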
00:50:15 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:02.989 ************************************ 00:03:02.989 END TEST driver 00:03:02.989 ************************************ 00:03:03.247 00:50:15 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:03.247 00:50:15 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:03.247 00:50:15 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:03.247 00:50:15 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:03.247 ************************************ 00:03:03.247 START TEST devices 00:03:03.247 ************************************ 00:03:03.247 00:50:15 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:03.247 * Looking for test storage... 00:03:03.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:03.247 00:50:15 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:03.247 00:50:15 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:03.247 00:50:15 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:03.247 00:50:15 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:04.720 00:50:17 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:04.720 00:50:17 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:04.720 00:50:17 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:04.720 00:50:17 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:04.720 00:50:17 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:04.720 00:50:17 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:04.720 00:50:17 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:04.720 00:50:17 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:04.720 00:50:17 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:04.720 00:50:17 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:04.720 00:50:17 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:04.720 00:50:17 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:04.720 00:50:17 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:04.720 00:50:17 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:04.720 00:50:17 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:04.720 00:50:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:04.720 00:50:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:04.720 00:50:17 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:03:04.720 00:50:17 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:04.720 00:50:17 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:04.720 00:50:17 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:04.720 00:50:17 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:04.720 No valid GPT data, 
bailing 00:03:04.720 00:50:17 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:04.978 00:50:17 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:04.978 00:50:17 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:04.978 00:50:17 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:04.978 00:50:17 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:04.978 00:50:17 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:04.978 00:50:17 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:04.978 00:50:17 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:04.978 00:50:17 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:04.978 00:50:17 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:03:04.978 00:50:17 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:04.978 00:50:17 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:04.978 00:50:17 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:04.978 00:50:17 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:04.978 00:50:17 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:04.978 00:50:17 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:04.978 ************************************ 00:03:04.978 START TEST nvme_mount 00:03:04.978 ************************************ 00:03:04.978 00:50:17 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:03:04.978 00:50:17 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:04.978 00:50:17 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:04.978 00:50:17 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:04.978 00:50:17 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:04.978 00:50:17 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:04.978 00:50:17 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:04.978 00:50:17 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:04.978 00:50:17 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:04.978 00:50:17 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:04.978 00:50:17 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:04.978 00:50:17 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:04.978 00:50:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:04.978 00:50:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:04.978 00:50:17 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:04.978 00:50:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:04.978 00:50:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:04.978 00:50:17 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:04.978 00:50:17 
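Before any mount test runs, the devices suite qualifies the disk: spdk-gpt.py reports "No valid GPT data, bailing", blkid confirms there is no partition table, and the size derived from the 512-byte sector count (1000204886016 bytes here) must be at least min_disk_size=3221225472, i.e. 3 GiB. A rough equivalent of that gate, with the threshold taken from the trace and the helper name invented for the example:

    # Accept a block device only if it carries no partition table and is >= 3 GiB.
    disk_usable() {
        local dev=$1 min=$((3 * 1024 * 1024 * 1024))
        local sectors size
        blkid -s PTTYPE -o value "/dev/$dev" | grep -q . && return 1   # has a partition table
        sectors=$(cat "/sys/block/$dev/size")                          # 512-byte sectors
        size=$((sectors * 512))
        (( size >= min ))
    }

    # e.g.: disk_usable nvme0n1 && echo "nvme0n1 qualifies for the mount tests"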
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:04.978 00:50:17 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:05.913 Creating new GPT entries in memory. 00:03:05.914 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:05.914 other utilities. 00:03:05.914 00:50:18 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:05.914 00:50:18 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:05.914 00:50:18 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:05.914 00:50:18 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:05.914 00:50:18 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:06.848 Creating new GPT entries in memory. 00:03:06.848 The operation has completed successfully. 00:03:06.848 00:50:19 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:06.849 00:50:19 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:06.849 00:50:19 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1113065 00:03:06.849 00:50:19 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:06.849 00:50:19 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:06.849 00:50:19 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:06.849 00:50:19 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:06.849 00:50:19 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:06.849 00:50:19 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:06.849 00:50:19 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:06.849 00:50:19 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:06.849 00:50:19 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:06.849 00:50:19 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:06.849 00:50:19 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:06.849 00:50:19 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:06.849 00:50:19 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:06.849 00:50:19 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:06.849 00:50:19 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
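The nvme_mount steps traced here follow a fixed recipe: wipe the GPT with sgdisk --zap-all, wait for the uevent helper, create a single ~1 GiB partition (sectors 2048-2099199), format it with mkfs.ext4 -qF, mount it under the test directory, and drop a dummy test file on it for later verification. A condensed, hand-written version of the same sequence (the mount point, the touch call, and the use of udevadm settle in place of SPDK's sync_dev_uevents.sh are assumptions for the sketch):

    disk=/dev/nvme0n1
    mnt=/tmp/nvme_mount                         # stand-in for .../spdk/test/setup/nvme_mount
    sgdisk "$disk" --zap-all                    # destroy any existing GPT/MBR
    sgdisk "$disk" --new=1:2048:2099199         # one ~1 GiB partition, as in the log
    udevadm settle                              # wait for the partition node to appear
    mkfs.ext4 -qF "${disk}p1"
    mkdir -p "$mnt"
    mount "${disk}p1" "$mnt"
    touch "$mnt/test_nvme"                      # the dummy file the verify step looks for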
00:03:06.849 00:50:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:06.849 00:50:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:06.849 00:50:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:06.849 00:50:19 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.849 00:50:19 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.224 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.484 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:08.484 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:08.484 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:08.484 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:08.484 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:08.484 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:08.484 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:08.484 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:08.484 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:08.484 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:08.484 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:08.484 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:08.484 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:08.744 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:08.744 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:08.744 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:08.744 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:08.744 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:08.744 00:50:20 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:08.744 00:50:20 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:08.744 00:50:20 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:08.744 00:50:20 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:08.744 00:50:20 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:08.744 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:08.744 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:08.744 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:08.744 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:08.744 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:08.744 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:08.744 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:08.744 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:08.744 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:08.744 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.744 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:08.744 00:50:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:08.744 00:50:20 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:08.744 00:50:20 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:10.121 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.121 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:10.121 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:10.121 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.121 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.121 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.121 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.121 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.121 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.121 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.121 00:50:22 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.121 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.121 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.121 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.121 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.121 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.121 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.121 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.121 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.121 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.121 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.121 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.121 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.121 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.121 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.122 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.122 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.122 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.122 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.122 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.122 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.122 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.122 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.122 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.122 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.122 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.122 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:10.122 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:10.122 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:10.122 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:10.122 00:50:22 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:10.122 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:10.122 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:03:10.122 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:10.122 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:10.122 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:10.122 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:10.122 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:10.122 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:10.122 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:10.122 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.122 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:10.122 00:50:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:10.122 00:50:22 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:10.122 00:50:22 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:11.494 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.494 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:11.494 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:11.494 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.494 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.494 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.494 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.494 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.494 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.494 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.494 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.494 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.494 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.494 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.494 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.494 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.494 00:50:23 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.494 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.495 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.495 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.495 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.495 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.495 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.495 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.495 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.495 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.495 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.495 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.495 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.495 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.495 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.495 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.495 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.495 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.495 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.495 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.753 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:11.753 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:11.753 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:11.753 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:11.753 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:11.753 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:11.753 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:11.753 00:50:23 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:11.753 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:11.753 00:03:11.753 real 0m6.792s 00:03:11.753 user 0m1.736s 00:03:11.753 sys 0m2.677s 00:03:11.753 00:50:23 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:11.753 00:50:23 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:11.753 ************************************ 00:03:11.753 END TEST nvme_mount 00:03:11.753 ************************************ 
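Cleanup at the end of nvme_mount is symmetric: unmount the test directory if it is still a mountpoint, wipe the partition's filesystem signature if the partition still exists, then wipe the whole disk so the next test starts from a blank device; the "2 bytes were erased ... 53 ef" lines are wipefs removing the ext4 superblock magic. A hedged sketch of that teardown (same path caveats as above):

    mnt=/tmp/nvme_mount
    disk=/dev/nvme0n1
    mountpoint -q "$mnt" && umount "$mnt"            # only unmount if still mounted
    [[ -b ${disk}p1 ]] && wipefs --all "${disk}p1"   # drop the ext4 signature (53 ef)
    [[ -b $disk      ]] && wipefs --all "$disk"      # drop GPT/PMBR headers on the raw disk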
00:03:11.753 00:50:23 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:11.753 00:50:23 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:11.753 00:50:23 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:11.753 00:50:23 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:11.753 ************************************ 00:03:11.753 START TEST dm_mount 00:03:11.753 ************************************ 00:03:11.753 00:50:23 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:03:11.753 00:50:23 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:11.753 00:50:23 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:11.753 00:50:23 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:11.753 00:50:23 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:11.753 00:50:23 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:11.753 00:50:23 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:11.753 00:50:23 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:11.753 00:50:23 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:11.753 00:50:23 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:11.753 00:50:23 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:11.753 00:50:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:11.753 00:50:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:11.753 00:50:23 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:11.753 00:50:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:11.753 00:50:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:11.753 00:50:23 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:11.753 00:50:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:11.753 00:50:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:11.753 00:50:23 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:11.753 00:50:23 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:11.753 00:50:23 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:12.686 Creating new GPT entries in memory. 00:03:12.686 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:12.686 other utilities. 00:03:12.686 00:50:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:12.686 00:50:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:12.686 00:50:24 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:12.686 00:50:24 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:12.686 00:50:24 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:13.621 Creating new GPT entries in memory. 00:03:13.621 The operation has completed successfully. 
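dm_mount repeats the partitioning dance but with two ~1 GiB partitions: the first sgdisk --new call has just completed above, and the second (2:2099200:4196351) follows below, each issued under flock on the disk so nothing else can race the partition-table edits before the script waits on its uevent-sync helper. A minimal reproduction of the two-partition step (the flock usage mirrors the trace; udevadm settle stands in for scripts/sync_dev_uevents.sh):

    disk=/dev/nvme0n1
    sgdisk "$disk" --zap-all
    # Serialize the two partition-table edits against anything else holding the disk.
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199
    flock "$disk" sgdisk "$disk" --new=2:2099200:4196351
    udevadm settle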
00:03:13.621 00:50:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:13.621 00:50:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:13.621 00:50:26 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:13.621 00:50:26 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:13.621 00:50:26 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:14.996 The operation has completed successfully. 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1115745 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:14.996 00:50:27 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:16.405 00:50:28 
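After dmsetup create nvme_dm_test, the two partitions become holders of dm-0, which is what the "holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0" strings in the verify step encode: each /sys/class/block/<part>/holders/ directory must contain the device-mapper node. A small check in the same spirit (the mapper name matches the log; the function wrapper is invented):

    # Verify that both partitions are claimed by the device-mapper target.
    dm_holders_ok() {
        local dm part
        dm=$(basename "$(readlink -f /dev/mapper/nvme_dm_test)")   # e.g. dm-0
        for part in nvme0n1p1 nvme0n1p2; do
            [[ -e /sys/class/block/$part/holders/$dm ]] || return 1
        done
    }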
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:16.405 00:50:28 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:17.781 00:50:29 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:17.781 00:50:30 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:17.781 00:50:30 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:17.781 00:50:30 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:17.781 00:50:30 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:17.781 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:17.781 00:50:30 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:17.781 00:50:30 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:17.781 00:03:17.781 real 0m6.068s 00:03:17.781 user 0m1.134s 00:03:17.781 sys 0m1.831s 00:03:17.781 00:50:30 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:17.781 00:50:30 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:17.781 ************************************ 00:03:17.781 END TEST dm_mount 00:03:17.781 ************************************ 00:03:17.781 00:50:30 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:17.781 00:50:30 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:17.781 00:50:30 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:17.781 00:50:30 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:17.781 00:50:30 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:17.781 00:50:30 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:17.781 00:50:30 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:18.074 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:18.074 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:18.074 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:18.074 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:18.074 00:50:30 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:18.074 00:50:30 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:18.074 00:50:30 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:18.074 00:50:30 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:18.074 00:50:30 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:18.074 00:50:30 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:18.074 00:50:30 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:18.074 00:03:18.074 real 0m14.932s 00:03:18.074 user 0m3.601s 00:03:18.074 sys 0m5.619s 00:03:18.074 00:50:30 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:18.074 00:50:30 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:18.074 ************************************ 00:03:18.074 END TEST devices 00:03:18.074 ************************************ 00:03:18.074 00:03:18.074 real 0m46.882s 00:03:18.074 user 0m13.979s 00:03:18.074 sys 0m21.441s 00:03:18.074 00:50:30 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:18.074 00:50:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:18.074 ************************************ 00:03:18.074 END TEST setup.sh 00:03:18.074 ************************************ 00:03:18.074 00:50:30 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:19.449 Hugepages 00:03:19.449 node hugesize free / total 00:03:19.450 node0 1048576kB 0 / 0 00:03:19.450 node0 2048kB 2048 / 2048 00:03:19.450 node1 1048576kB 0 / 0 00:03:19.450 node1 2048kB 0 / 0 00:03:19.450 00:03:19.450 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:19.450 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:19.450 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:19.450 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:19.450 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:19.450 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:19.450 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:19.450 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:19.450 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:19.450 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:19.450 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:19.450 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:19.450 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:19.450 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:19.450 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:19.450 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:19.450 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:19.450 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:19.450 00:50:31 -- spdk/autotest.sh@130 -- # uname -s 00:03:19.450 00:50:31 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:19.450 00:50:31 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:19.450 00:50:31 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:20.822 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:20.822 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:20.822 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:20.822 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:20.822 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:20.822 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:20.822 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:20.822 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:20.822 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:20.822 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:20.822 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:20.822 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:20.822 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:20.822 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:20.822 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:20.822 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:21.754 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:22.011 00:50:34 -- common/autotest_common.sh@1528 -- # sleep 1 00:03:22.943 00:50:35 -- common/autotest_common.sh@1529 -- # bdfs=() 00:03:22.943 00:50:35 -- common/autotest_common.sh@1529 -- # local bdfs 00:03:22.943 00:50:35 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:03:22.943 00:50:35 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:03:22.943 00:50:35 -- common/autotest_common.sh@1509 -- # bdfs=() 00:03:22.943 00:50:35 -- common/autotest_common.sh@1509 -- # local bdfs 00:03:22.943 00:50:35 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:22.943 00:50:35 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:22.943 00:50:35 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:03:22.943 00:50:35 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:03:22.943 00:50:35 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:03:22.943 00:50:35 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:24.314 Waiting for block devices as requested 00:03:24.314 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:24.314 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:24.571 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:24.571 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:24.571 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:24.571 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:24.829 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:24.829 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:24.829 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:24.829 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:25.086 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:25.086 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:25.086 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:25.086 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:25.344 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:25.344 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:25.344 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:25.602 00:50:37 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 
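Note (illustrative sketch, not captured output): the get_nvme_bdfs helper traced just above builds its BDF list by having gen_nvme.sh emit an SPDK bdev config for every local NVMe controller and pulling each controller's PCI address (traddr) out with jq. A minimal standalone equivalent, with the rootdir path taken from this job's workspace layout, would be:
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Collect NVMe PCI addresses (BDFs) the same way the helper's trace shows.
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
printf '%s\n' "${bdfs[@]}"    # on this node: 0000:88:00.0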
00:03:25.602 00:50:37 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:25.602 00:50:37 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:03:25.602 00:50:37 -- common/autotest_common.sh@1498 -- # grep 0000:88:00.0/nvme/nvme 00:03:25.602 00:50:37 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:25.602 00:50:37 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:25.602 00:50:37 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:25.602 00:50:37 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:03:25.602 00:50:37 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:03:25.602 00:50:37 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:03:25.602 00:50:37 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:03:25.602 00:50:37 -- common/autotest_common.sh@1541 -- # grep oacs 00:03:25.602 00:50:37 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:03:25.602 00:50:37 -- common/autotest_common.sh@1541 -- # oacs=' 0xf' 00:03:25.602 00:50:37 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:03:25.602 00:50:37 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:03:25.602 00:50:37 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:03:25.602 00:50:37 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:03:25.602 00:50:37 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:03:25.602 00:50:37 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:03:25.602 00:50:37 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:03:25.602 00:50:37 -- common/autotest_common.sh@1553 -- # continue 00:03:25.602 00:50:37 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:25.602 00:50:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:25.602 00:50:37 -- common/autotest_common.sh@10 -- # set +x 00:03:25.602 00:50:37 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:25.602 00:50:37 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:25.602 00:50:37 -- common/autotest_common.sh@10 -- # set +x 00:03:25.602 00:50:37 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:26.976 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:26.976 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:26.976 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:26.976 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:26.976 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:26.976 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:26.976 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:26.976 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:26.976 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:26.976 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:26.976 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:26.976 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:26.976 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:26.976 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:26.976 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:26.976 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:27.909 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:27.909 00:50:40 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:27.909 00:50:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:27.909 00:50:40 -- 
common/autotest_common.sh@10 -- # set +x 00:03:27.909 00:50:40 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:27.909 00:50:40 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:03:27.909 00:50:40 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:03:27.909 00:50:40 -- common/autotest_common.sh@1573 -- # bdfs=() 00:03:27.909 00:50:40 -- common/autotest_common.sh@1573 -- # local bdfs 00:03:27.909 00:50:40 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:03:27.909 00:50:40 -- common/autotest_common.sh@1509 -- # bdfs=() 00:03:27.909 00:50:40 -- common/autotest_common.sh@1509 -- # local bdfs 00:03:27.909 00:50:40 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:27.909 00:50:40 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:27.909 00:50:40 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:03:28.182 00:50:40 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:03:28.182 00:50:40 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:03:28.182 00:50:40 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:03:28.182 00:50:40 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:03:28.182 00:50:40 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:03:28.182 00:50:40 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:28.182 00:50:40 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:03:28.182 00:50:40 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:88:00.0 00:03:28.182 00:50:40 -- common/autotest_common.sh@1588 -- # [[ -z 0000:88:00.0 ]] 00:03:28.182 00:50:40 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=1121633 00:03:28.182 00:50:40 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:28.182 00:50:40 -- common/autotest_common.sh@1594 -- # waitforlisten 1121633 00:03:28.182 00:50:40 -- common/autotest_common.sh@827 -- # '[' -z 1121633 ']' 00:03:28.182 00:50:40 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:28.182 00:50:40 -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:28.182 00:50:40 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:28.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:28.182 00:50:40 -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:28.182 00:50:40 -- common/autotest_common.sh@10 -- # set +x 00:03:28.182 [2024-05-15 00:50:40.384286] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
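Companion sketch (illustrative only; opal_bdfs is a hypothetical name): the get_nvme_bdfs_by_id 0x0a54 step traced above narrows that BDF list to controllers whose PCI device ID matches, by reading sysfs exactly as the trace shows for 0000:88:00.0:
opal_bdfs=()
for bdf in "${bdfs[@]}"; do
  device=$(cat "/sys/bus/pci/devices/$bdf/device")   # 0x0a54 for the controller at 0000:88:00.0
  [[ $device == 0x0a54 ]] && opal_bdfs+=("$bdf")
done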
00:03:28.182 [2024-05-15 00:50:40.384369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1121633 ] 00:03:28.182 EAL: No free 2048 kB hugepages reported on node 1 00:03:28.182 [2024-05-15 00:50:40.452226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:28.182 [2024-05-15 00:50:40.562057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:28.440 00:50:40 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:03:28.440 00:50:40 -- common/autotest_common.sh@860 -- # return 0 00:03:28.440 00:50:40 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:03:28.440 00:50:40 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:03:28.440 00:50:40 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:03:31.717 nvme0n1 00:03:31.717 00:50:43 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:31.974 [2024-05-15 00:50:44.141606] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:31.974 [2024-05-15 00:50:44.141651] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:31.975 request: 00:03:31.975 { 00:03:31.975 "nvme_ctrlr_name": "nvme0", 00:03:31.975 "password": "test", 00:03:31.975 "method": "bdev_nvme_opal_revert", 00:03:31.975 "req_id": 1 00:03:31.975 } 00:03:31.975 Got JSON-RPC error response 00:03:31.975 response: 00:03:31.975 { 00:03:31.975 "code": -32603, 00:03:31.975 "message": "Internal error" 00:03:31.975 } 00:03:31.975 00:50:44 -- common/autotest_common.sh@1600 -- # true 00:03:31.975 00:50:44 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:03:31.975 00:50:44 -- common/autotest_common.sh@1604 -- # killprocess 1121633 00:03:31.975 00:50:44 -- common/autotest_common.sh@946 -- # '[' -z 1121633 ']' 00:03:31.975 00:50:44 -- common/autotest_common.sh@950 -- # kill -0 1121633 00:03:31.975 00:50:44 -- common/autotest_common.sh@951 -- # uname 00:03:31.975 00:50:44 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:31.975 00:50:44 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1121633 00:03:31.975 00:50:44 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:31.975 00:50:44 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:31.975 00:50:44 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1121633' 00:03:31.975 killing process with pid 1121633 00:03:31.975 00:50:44 -- common/autotest_common.sh@965 -- # kill 1121633 00:03:31.975 00:50:44 -- common/autotest_common.sh@970 -- # wait 1121633 00:03:33.871 00:50:45 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:33.871 00:50:45 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:33.871 00:50:45 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:33.871 00:50:45 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:33.871 00:50:45 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:33.871 00:50:45 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:33.871 00:50:45 -- common/autotest_common.sh@10 -- # set +x 00:03:33.871 00:50:45 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:33.871 00:50:45 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:33.871 00:50:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:33.871 00:50:45 -- common/autotest_common.sh@10 -- # set +x 00:03:33.871 ************************************ 00:03:33.871 START TEST env 00:03:33.871 ************************************ 00:03:33.871 00:50:46 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:33.871 * Looking for test storage... 00:03:33.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:33.871 00:50:46 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:33.871 00:50:46 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:33.871 00:50:46 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:33.871 00:50:46 env -- common/autotest_common.sh@10 -- # set +x 00:03:33.871 ************************************ 00:03:33.871 START TEST env_memory 00:03:33.871 ************************************ 00:03:33.871 00:50:46 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:33.871 00:03:33.871 00:03:33.871 CUnit - A unit testing framework for C - Version 2.1-3 00:03:33.871 http://cunit.sourceforge.net/ 00:03:33.871 00:03:33.871 00:03:33.871 Suite: memory 00:03:33.871 Test: alloc and free memory map ...[2024-05-15 00:50:46.147504] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:33.871 passed 00:03:33.871 Test: mem map translation ...[2024-05-15 00:50:46.167411] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:33.871 [2024-05-15 00:50:46.167431] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:33.871 [2024-05-15 00:50:46.167486] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:33.871 [2024-05-15 00:50:46.167497] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:33.871 passed 00:03:33.871 Test: mem map registration ...[2024-05-15 00:50:46.208268] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:33.871 [2024-05-15 00:50:46.208288] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:33.871 passed 00:03:34.135 Test: mem map adjacent registrations ...passed 00:03:34.135 00:03:34.135 Run Summary: Type Total Ran Passed Failed Inactive 00:03:34.135 suites 1 1 n/a 0 0 00:03:34.135 tests 4 4 4 0 0 00:03:34.135 asserts 152 152 152 0 n/a 00:03:34.135 00:03:34.135 Elapsed time = 0.140 seconds 00:03:34.135 00:03:34.135 real 0m0.148s 00:03:34.135 user 0m0.140s 00:03:34.135 sys 0m0.008s 00:03:34.135 00:50:46 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:34.135 00:50:46 
env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:34.135 ************************************ 00:03:34.135 END TEST env_memory 00:03:34.135 ************************************ 00:03:34.135 00:50:46 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:34.135 00:50:46 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:34.135 00:50:46 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:34.135 00:50:46 env -- common/autotest_common.sh@10 -- # set +x 00:03:34.135 ************************************ 00:03:34.135 START TEST env_vtophys 00:03:34.135 ************************************ 00:03:34.135 00:50:46 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:34.135 EAL: lib.eal log level changed from notice to debug 00:03:34.135 EAL: Detected lcore 0 as core 0 on socket 0 00:03:34.135 EAL: Detected lcore 1 as core 1 on socket 0 00:03:34.135 EAL: Detected lcore 2 as core 2 on socket 0 00:03:34.135 EAL: Detected lcore 3 as core 3 on socket 0 00:03:34.135 EAL: Detected lcore 4 as core 4 on socket 0 00:03:34.135 EAL: Detected lcore 5 as core 5 on socket 0 00:03:34.135 EAL: Detected lcore 6 as core 8 on socket 0 00:03:34.135 EAL: Detected lcore 7 as core 9 on socket 0 00:03:34.135 EAL: Detected lcore 8 as core 10 on socket 0 00:03:34.135 EAL: Detected lcore 9 as core 11 on socket 0 00:03:34.135 EAL: Detected lcore 10 as core 12 on socket 0 00:03:34.135 EAL: Detected lcore 11 as core 13 on socket 0 00:03:34.135 EAL: Detected lcore 12 as core 0 on socket 1 00:03:34.135 EAL: Detected lcore 13 as core 1 on socket 1 00:03:34.135 EAL: Detected lcore 14 as core 2 on socket 1 00:03:34.135 EAL: Detected lcore 15 as core 3 on socket 1 00:03:34.135 EAL: Detected lcore 16 as core 4 on socket 1 00:03:34.135 EAL: Detected lcore 17 as core 5 on socket 1 00:03:34.135 EAL: Detected lcore 18 as core 8 on socket 1 00:03:34.135 EAL: Detected lcore 19 as core 9 on socket 1 00:03:34.135 EAL: Detected lcore 20 as core 10 on socket 1 00:03:34.135 EAL: Detected lcore 21 as core 11 on socket 1 00:03:34.135 EAL: Detected lcore 22 as core 12 on socket 1 00:03:34.135 EAL: Detected lcore 23 as core 13 on socket 1 00:03:34.135 EAL: Detected lcore 24 as core 0 on socket 0 00:03:34.135 EAL: Detected lcore 25 as core 1 on socket 0 00:03:34.135 EAL: Detected lcore 26 as core 2 on socket 0 00:03:34.135 EAL: Detected lcore 27 as core 3 on socket 0 00:03:34.135 EAL: Detected lcore 28 as core 4 on socket 0 00:03:34.135 EAL: Detected lcore 29 as core 5 on socket 0 00:03:34.135 EAL: Detected lcore 30 as core 8 on socket 0 00:03:34.135 EAL: Detected lcore 31 as core 9 on socket 0 00:03:34.135 EAL: Detected lcore 32 as core 10 on socket 0 00:03:34.135 EAL: Detected lcore 33 as core 11 on socket 0 00:03:34.135 EAL: Detected lcore 34 as core 12 on socket 0 00:03:34.135 EAL: Detected lcore 35 as core 13 on socket 0 00:03:34.135 EAL: Detected lcore 36 as core 0 on socket 1 00:03:34.135 EAL: Detected lcore 37 as core 1 on socket 1 00:03:34.135 EAL: Detected lcore 38 as core 2 on socket 1 00:03:34.135 EAL: Detected lcore 39 as core 3 on socket 1 00:03:34.135 EAL: Detected lcore 40 as core 4 on socket 1 00:03:34.135 EAL: Detected lcore 41 as core 5 on socket 1 00:03:34.135 EAL: Detected lcore 42 as core 8 on socket 1 00:03:34.135 EAL: Detected lcore 43 as core 9 on socket 1 00:03:34.135 EAL: Detected lcore 44 as core 10 on socket 1 00:03:34.135 EAL: 
Detected lcore 45 as core 11 on socket 1 00:03:34.135 EAL: Detected lcore 46 as core 12 on socket 1 00:03:34.135 EAL: Detected lcore 47 as core 13 on socket 1 00:03:34.135 EAL: Maximum logical cores by configuration: 128 00:03:34.135 EAL: Detected CPU lcores: 48 00:03:34.135 EAL: Detected NUMA nodes: 2 00:03:34.135 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:03:34.135 EAL: Detected shared linkage of DPDK 00:03:34.135 EAL: No shared files mode enabled, IPC will be disabled 00:03:34.135 EAL: Bus pci wants IOVA as 'DC' 00:03:34.135 EAL: Buses did not request a specific IOVA mode. 00:03:34.135 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:34.135 EAL: Selected IOVA mode 'VA' 00:03:34.135 EAL: No free 2048 kB hugepages reported on node 1 00:03:34.135 EAL: Probing VFIO support... 00:03:34.135 EAL: IOMMU type 1 (Type 1) is supported 00:03:34.135 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:34.135 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:34.135 EAL: VFIO support initialized 00:03:34.135 EAL: Ask a virtual area of 0x2e000 bytes 00:03:34.135 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:34.135 EAL: Setting up physically contiguous memory... 00:03:34.135 EAL: Setting maximum number of open files to 524288 00:03:34.135 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:34.135 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:34.135 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:34.135 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.135 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:34.135 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:34.135 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.135 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:34.135 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:34.135 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.135 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:34.135 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:34.135 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.135 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:34.135 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:34.135 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.135 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:34.135 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:34.135 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.135 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:34.135 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:34.135 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.135 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:34.135 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:34.135 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.135 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:34.135 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:34.135 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:34.135 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.135 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:34.135 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:34.135 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.135 EAL: Virtual area found at 0x201000a00000 (size = 
0x400000000) 00:03:34.135 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:34.135 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.135 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:34.135 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:34.135 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.135 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:34.135 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:34.135 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.135 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:34.135 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:34.135 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.135 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:34.135 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:34.135 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.135 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:34.135 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:34.135 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.135 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:34.135 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:34.135 EAL: Hugepages will be freed exactly as allocated. 00:03:34.135 EAL: No shared files mode enabled, IPC is disabled 00:03:34.135 EAL: No shared files mode enabled, IPC is disabled 00:03:34.135 EAL: TSC frequency is ~2700000 KHz 00:03:34.135 EAL: Main lcore 0 is ready (tid=7fa3c7cd9a00;cpuset=[0]) 00:03:34.135 EAL: Trying to obtain current memory policy. 00:03:34.135 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.135 EAL: Restoring previous memory policy: 0 00:03:34.135 EAL: request: mp_malloc_sync 00:03:34.135 EAL: No shared files mode enabled, IPC is disabled 00:03:34.135 EAL: Heap on socket 0 was expanded by 2MB 00:03:34.135 EAL: No shared files mode enabled, IPC is disabled 00:03:34.135 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:34.136 EAL: Mem event callback 'spdk:(nil)' registered 00:03:34.136 00:03:34.136 00:03:34.136 CUnit - A unit testing framework for C - Version 2.1-3 00:03:34.136 http://cunit.sourceforge.net/ 00:03:34.136 00:03:34.136 00:03:34.136 Suite: components_suite 00:03:34.136 Test: vtophys_malloc_test ...passed 00:03:34.136 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:34.136 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.136 EAL: Restoring previous memory policy: 4 00:03:34.136 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.136 EAL: request: mp_malloc_sync 00:03:34.136 EAL: No shared files mode enabled, IPC is disabled 00:03:34.136 EAL: Heap on socket 0 was expanded by 4MB 00:03:34.136 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.136 EAL: request: mp_malloc_sync 00:03:34.136 EAL: No shared files mode enabled, IPC is disabled 00:03:34.136 EAL: Heap on socket 0 was shrunk by 4MB 00:03:34.136 EAL: Trying to obtain current memory policy. 
00:03:34.136 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.136 EAL: Restoring previous memory policy: 4 00:03:34.136 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.136 EAL: request: mp_malloc_sync 00:03:34.136 EAL: No shared files mode enabled, IPC is disabled 00:03:34.136 EAL: Heap on socket 0 was expanded by 6MB 00:03:34.136 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.136 EAL: request: mp_malloc_sync 00:03:34.136 EAL: No shared files mode enabled, IPC is disabled 00:03:34.136 EAL: Heap on socket 0 was shrunk by 6MB 00:03:34.136 EAL: Trying to obtain current memory policy. 00:03:34.136 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.136 EAL: Restoring previous memory policy: 4 00:03:34.136 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.136 EAL: request: mp_malloc_sync 00:03:34.136 EAL: No shared files mode enabled, IPC is disabled 00:03:34.136 EAL: Heap on socket 0 was expanded by 10MB 00:03:34.136 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.136 EAL: request: mp_malloc_sync 00:03:34.136 EAL: No shared files mode enabled, IPC is disabled 00:03:34.136 EAL: Heap on socket 0 was shrunk by 10MB 00:03:34.136 EAL: Trying to obtain current memory policy. 00:03:34.136 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.136 EAL: Restoring previous memory policy: 4 00:03:34.136 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.136 EAL: request: mp_malloc_sync 00:03:34.136 EAL: No shared files mode enabled, IPC is disabled 00:03:34.136 EAL: Heap on socket 0 was expanded by 18MB 00:03:34.136 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.136 EAL: request: mp_malloc_sync 00:03:34.136 EAL: No shared files mode enabled, IPC is disabled 00:03:34.136 EAL: Heap on socket 0 was shrunk by 18MB 00:03:34.136 EAL: Trying to obtain current memory policy. 00:03:34.136 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.136 EAL: Restoring previous memory policy: 4 00:03:34.136 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.136 EAL: request: mp_malloc_sync 00:03:34.136 EAL: No shared files mode enabled, IPC is disabled 00:03:34.136 EAL: Heap on socket 0 was expanded by 34MB 00:03:34.136 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.136 EAL: request: mp_malloc_sync 00:03:34.136 EAL: No shared files mode enabled, IPC is disabled 00:03:34.136 EAL: Heap on socket 0 was shrunk by 34MB 00:03:34.136 EAL: Trying to obtain current memory policy. 00:03:34.136 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.136 EAL: Restoring previous memory policy: 4 00:03:34.136 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.136 EAL: request: mp_malloc_sync 00:03:34.136 EAL: No shared files mode enabled, IPC is disabled 00:03:34.136 EAL: Heap on socket 0 was expanded by 66MB 00:03:34.136 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.136 EAL: request: mp_malloc_sync 00:03:34.136 EAL: No shared files mode enabled, IPC is disabled 00:03:34.136 EAL: Heap on socket 0 was shrunk by 66MB 00:03:34.136 EAL: Trying to obtain current memory policy. 
00:03:34.136 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.136 EAL: Restoring previous memory policy: 4 00:03:34.136 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.136 EAL: request: mp_malloc_sync 00:03:34.136 EAL: No shared files mode enabled, IPC is disabled 00:03:34.136 EAL: Heap on socket 0 was expanded by 130MB 00:03:34.394 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.394 EAL: request: mp_malloc_sync 00:03:34.394 EAL: No shared files mode enabled, IPC is disabled 00:03:34.394 EAL: Heap on socket 0 was shrunk by 130MB 00:03:34.394 EAL: Trying to obtain current memory policy. 00:03:34.394 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.394 EAL: Restoring previous memory policy: 4 00:03:34.394 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.394 EAL: request: mp_malloc_sync 00:03:34.394 EAL: No shared files mode enabled, IPC is disabled 00:03:34.394 EAL: Heap on socket 0 was expanded by 258MB 00:03:34.394 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.394 EAL: request: mp_malloc_sync 00:03:34.394 EAL: No shared files mode enabled, IPC is disabled 00:03:34.394 EAL: Heap on socket 0 was shrunk by 258MB 00:03:34.394 EAL: Trying to obtain current memory policy. 00:03:34.394 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.652 EAL: Restoring previous memory policy: 4 00:03:34.652 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.652 EAL: request: mp_malloc_sync 00:03:34.652 EAL: No shared files mode enabled, IPC is disabled 00:03:34.652 EAL: Heap on socket 0 was expanded by 514MB 00:03:34.652 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.910 EAL: request: mp_malloc_sync 00:03:34.910 EAL: No shared files mode enabled, IPC is disabled 00:03:34.910 EAL: Heap on socket 0 was shrunk by 514MB 00:03:34.910 EAL: Trying to obtain current memory policy. 
00:03:34.910 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:35.167 EAL: Restoring previous memory policy: 4 00:03:35.167 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.167 EAL: request: mp_malloc_sync 00:03:35.167 EAL: No shared files mode enabled, IPC is disabled 00:03:35.167 EAL: Heap on socket 0 was expanded by 1026MB 00:03:35.424 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.682 EAL: request: mp_malloc_sync 00:03:35.682 EAL: No shared files mode enabled, IPC is disabled 00:03:35.682 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:35.682 passed 00:03:35.682 00:03:35.682 Run Summary: Type Total Ran Passed Failed Inactive 00:03:35.682 suites 1 1 n/a 0 0 00:03:35.682 tests 2 2 2 0 0 00:03:35.682 asserts 497 497 497 0 n/a 00:03:35.682 00:03:35.682 Elapsed time = 1.385 seconds 00:03:35.682 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.682 EAL: request: mp_malloc_sync 00:03:35.682 EAL: No shared files mode enabled, IPC is disabled 00:03:35.682 EAL: Heap on socket 0 was shrunk by 2MB 00:03:35.682 EAL: No shared files mode enabled, IPC is disabled 00:03:35.682 EAL: No shared files mode enabled, IPC is disabled 00:03:35.682 EAL: No shared files mode enabled, IPC is disabled 00:03:35.682 00:03:35.682 real 0m1.517s 00:03:35.682 user 0m0.879s 00:03:35.682 sys 0m0.604s 00:03:35.682 00:50:47 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:35.682 00:50:47 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:35.682 ************************************ 00:03:35.682 END TEST env_vtophys 00:03:35.682 ************************************ 00:03:35.682 00:50:47 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:35.682 00:50:47 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:35.682 00:50:47 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:35.682 00:50:47 env -- common/autotest_common.sh@10 -- # set +x 00:03:35.682 ************************************ 00:03:35.682 START TEST env_pci 00:03:35.682 ************************************ 00:03:35.682 00:50:47 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:35.682 00:03:35.682 00:03:35.682 CUnit - A unit testing framework for C - Version 2.1-3 00:03:35.682 http://cunit.sourceforge.net/ 00:03:35.682 00:03:35.682 00:03:35.682 Suite: pci 00:03:35.682 Test: pci_hook ...[2024-05-15 00:50:47.895762] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1122526 has claimed it 00:03:35.682 EAL: Cannot find device (10000:00:01.0) 00:03:35.682 EAL: Failed to attach device on primary process 00:03:35.682 passed 00:03:35.682 00:03:35.682 Run Summary: Type Total Ran Passed Failed Inactive 00:03:35.682 suites 1 1 n/a 0 0 00:03:35.682 tests 1 1 1 0 0 00:03:35.682 asserts 25 25 25 0 n/a 00:03:35.682 00:03:35.682 Elapsed time = 0.027 seconds 00:03:35.682 00:03:35.682 real 0m0.040s 00:03:35.682 user 0m0.010s 00:03:35.682 sys 0m0.030s 00:03:35.682 00:50:47 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:35.682 00:50:47 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:35.682 ************************************ 00:03:35.682 END TEST env_pci 00:03:35.682 ************************************ 00:03:35.682 00:50:47 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:35.682 
00:50:47 env -- env/env.sh@15 -- # uname 00:03:35.682 00:50:47 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:35.682 00:50:47 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:35.682 00:50:47 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:35.682 00:50:47 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:03:35.682 00:50:47 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:35.682 00:50:47 env -- common/autotest_common.sh@10 -- # set +x 00:03:35.682 ************************************ 00:03:35.682 START TEST env_dpdk_post_init 00:03:35.682 ************************************ 00:03:35.682 00:50:47 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:35.682 EAL: Detected CPU lcores: 48 00:03:35.682 EAL: Detected NUMA nodes: 2 00:03:35.682 EAL: Detected shared linkage of DPDK 00:03:35.682 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:35.682 EAL: Selected IOVA mode 'VA' 00:03:35.682 EAL: No free 2048 kB hugepages reported on node 1 00:03:35.682 EAL: VFIO support initialized 00:03:35.682 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:35.940 EAL: Using IOMMU type 1 (Type 1) 00:03:35.940 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:35.940 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:35.940 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:35.940 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:35.940 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:35.940 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:35.940 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:35.940 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:35.940 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:35.940 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:35.940 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:35.940 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:35.940 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:35.940 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:35.940 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:35.941 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:36.874 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:03:40.154 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:03:40.154 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:03:40.154 Starting DPDK initialization... 00:03:40.154 Starting SPDK post initialization... 00:03:40.154 SPDK NVMe probe 00:03:40.154 Attaching to 0000:88:00.0 00:03:40.154 Attached to 0000:88:00.0 00:03:40.154 Cleaning up... 
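The probe sequence above (Attaching to 0000:88:00.0 ... Attached ... Cleaning up...) comes from the post-init test binary. As a sketch, a standalone invocation using the same core mask and base virtual address that env.sh assembles (env/env.sh@14 and @22 in the trace), with the path per this job's workspace, would be:
testdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env
# One core, fixed base virtual address, matching the argv env.sh builds.
"$testdir/env_dpdk_post_init/env_dpdk_post_init" -c 0x1 --base-virtaddr=0x200000000000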
00:03:40.154 00:03:40.154 real 0m4.466s 00:03:40.154 user 0m3.306s 00:03:40.154 sys 0m0.217s 00:03:40.154 00:50:52 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:40.154 00:50:52 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:40.154 ************************************ 00:03:40.154 END TEST env_dpdk_post_init 00:03:40.154 ************************************ 00:03:40.154 00:50:52 env -- env/env.sh@26 -- # uname 00:03:40.154 00:50:52 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:40.154 00:50:52 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:40.154 00:50:52 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:40.154 00:50:52 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:40.154 00:50:52 env -- common/autotest_common.sh@10 -- # set +x 00:03:40.154 ************************************ 00:03:40.154 START TEST env_mem_callbacks 00:03:40.154 ************************************ 00:03:40.154 00:50:52 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:40.154 EAL: Detected CPU lcores: 48 00:03:40.154 EAL: Detected NUMA nodes: 2 00:03:40.154 EAL: Detected shared linkage of DPDK 00:03:40.154 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:40.411 EAL: Selected IOVA mode 'VA' 00:03:40.411 EAL: No free 2048 kB hugepages reported on node 1 00:03:40.411 EAL: VFIO support initialized 00:03:40.411 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:40.411 00:03:40.411 00:03:40.411 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.411 http://cunit.sourceforge.net/ 00:03:40.411 00:03:40.411 00:03:40.411 Suite: memory 00:03:40.411 Test: test ... 
00:03:40.411 register 0x200000200000 2097152 00:03:40.411 malloc 3145728 00:03:40.411 register 0x200000400000 4194304 00:03:40.411 buf 0x200000500000 len 3145728 PASSED 00:03:40.411 malloc 64 00:03:40.411 buf 0x2000004fff40 len 64 PASSED 00:03:40.411 malloc 4194304 00:03:40.411 register 0x200000800000 6291456 00:03:40.411 buf 0x200000a00000 len 4194304 PASSED 00:03:40.411 free 0x200000500000 3145728 00:03:40.411 free 0x2000004fff40 64 00:03:40.411 unregister 0x200000400000 4194304 PASSED 00:03:40.411 free 0x200000a00000 4194304 00:03:40.411 unregister 0x200000800000 6291456 PASSED 00:03:40.411 malloc 8388608 00:03:40.411 register 0x200000400000 10485760 00:03:40.411 buf 0x200000600000 len 8388608 PASSED 00:03:40.411 free 0x200000600000 8388608 00:03:40.412 unregister 0x200000400000 10485760 PASSED 00:03:40.412 passed 00:03:40.412 00:03:40.412 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.412 suites 1 1 n/a 0 0 00:03:40.412 tests 1 1 1 0 0 00:03:40.412 asserts 15 15 15 0 n/a 00:03:40.412 00:03:40.412 Elapsed time = 0.005 seconds 00:03:40.412 00:03:40.412 real 0m0.055s 00:03:40.412 user 0m0.013s 00:03:40.412 sys 0m0.042s 00:03:40.412 00:50:52 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:40.412 00:50:52 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:40.412 ************************************ 00:03:40.412 END TEST env_mem_callbacks 00:03:40.412 ************************************ 00:03:40.412 00:03:40.412 real 0m6.551s 00:03:40.412 user 0m4.470s 00:03:40.412 sys 0m1.108s 00:03:40.412 00:50:52 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:40.412 00:50:52 env -- common/autotest_common.sh@10 -- # set +x 00:03:40.412 ************************************ 00:03:40.412 END TEST env 00:03:40.412 ************************************ 00:03:40.412 00:50:52 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:40.412 00:50:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:40.412 00:50:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:40.412 00:50:52 -- common/autotest_common.sh@10 -- # set +x 00:03:40.412 ************************************ 00:03:40.412 START TEST rpc 00:03:40.412 ************************************ 00:03:40.412 00:50:52 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:40.412 * Looking for test storage... 00:03:40.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:40.412 00:50:52 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1123307 00:03:40.412 00:50:52 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:40.412 00:50:52 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:40.412 00:50:52 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1123307 00:03:40.412 00:50:52 rpc -- common/autotest_common.sh@827 -- # '[' -z 1123307 ']' 00:03:40.412 00:50:52 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:40.412 00:50:52 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:40.412 00:50:52 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:40.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
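A compact sketch (not from the log) of the startup pattern rpc.sh is tracing here: launch spdk_tgt with the bdev tracepoint group enabled, wait for its JSON-RPC socket at /var/tmp/spdk.sock, then drive it with scripts/rpc.py, which is what the rpc_cmd wrapper used by this suite sits on top of. waitforlisten is the helper from the common autotest functions seen throughout this job.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$rootdir/build/bin/spdk_tgt" -e bdev &
spdk_pid=$!
waitforlisten "$spdk_pid"                     # blocks until /var/tmp/spdk.sock answers
"$rootdir/scripts/rpc.py" bdev_get_bdevs      # '[]' before any bdev is created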
00:03:40.412 00:50:52 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:40.412 00:50:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.412 [2024-05-15 00:50:52.726815] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:03:40.412 [2024-05-15 00:50:52.726906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1123307 ] 00:03:40.412 EAL: No free 2048 kB hugepages reported on node 1 00:03:40.412 [2024-05-15 00:50:52.793375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:40.670 [2024-05-15 00:50:52.901617] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:40.670 [2024-05-15 00:50:52.901669] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1123307' to capture a snapshot of events at runtime. 00:03:40.670 [2024-05-15 00:50:52.901682] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:40.670 [2024-05-15 00:50:52.901693] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:40.670 [2024-05-15 00:50:52.901703] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1123307 for offline analysis/debug. 00:03:40.670 [2024-05-15 00:50:52.901729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:40.928 00:50:53 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:03:40.928 00:50:53 rpc -- common/autotest_common.sh@860 -- # return 0 00:03:40.928 00:50:53 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:40.928 00:50:53 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:40.928 00:50:53 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:40.928 00:50:53 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:40.928 00:50:53 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:40.928 00:50:53 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:40.928 00:50:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.928 ************************************ 00:03:40.928 START TEST rpc_integrity 00:03:40.928 ************************************ 00:03:40.928 00:50:53 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:03:40.928 00:50:53 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:40.928 00:50:53 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:40.928 00:50:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.928 00:50:53 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:40.928 00:50:53 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:40.928 00:50:53 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:40.928 00:50:53 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:40.928 00:50:53 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:40.928 00:50:53 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:40.928 00:50:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.928 00:50:53 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:40.928 00:50:53 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:40.928 00:50:53 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:40.928 00:50:53 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:40.928 00:50:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.928 00:50:53 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:40.928 00:50:53 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:40.928 { 00:03:40.928 "name": "Malloc0", 00:03:40.928 "aliases": [ 00:03:40.928 "eb0882b8-bfd4-487f-9f2f-71fb94a990d7" 00:03:40.928 ], 00:03:40.928 "product_name": "Malloc disk", 00:03:40.928 "block_size": 512, 00:03:40.928 "num_blocks": 16384, 00:03:40.928 "uuid": "eb0882b8-bfd4-487f-9f2f-71fb94a990d7", 00:03:40.928 "assigned_rate_limits": { 00:03:40.928 "rw_ios_per_sec": 0, 00:03:40.928 "rw_mbytes_per_sec": 0, 00:03:40.928 "r_mbytes_per_sec": 0, 00:03:40.928 "w_mbytes_per_sec": 0 00:03:40.928 }, 00:03:40.928 "claimed": false, 00:03:40.928 "zoned": false, 00:03:40.928 "supported_io_types": { 00:03:40.928 "read": true, 00:03:40.928 "write": true, 00:03:40.928 "unmap": true, 00:03:40.928 "write_zeroes": true, 00:03:40.928 "flush": true, 00:03:40.928 "reset": true, 00:03:40.928 "compare": false, 00:03:40.928 "compare_and_write": false, 00:03:40.928 "abort": true, 00:03:40.928 "nvme_admin": false, 00:03:40.928 "nvme_io": false 00:03:40.928 }, 00:03:40.928 "memory_domains": [ 00:03:40.928 { 00:03:40.928 "dma_device_id": "system", 00:03:40.928 "dma_device_type": 1 00:03:40.928 }, 00:03:40.928 { 00:03:40.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:40.928 "dma_device_type": 2 00:03:40.928 } 00:03:40.928 ], 00:03:40.928 "driver_specific": {} 00:03:40.928 } 00:03:40.928 ]' 00:03:40.928 00:50:53 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:40.928 00:50:53 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:40.928 00:50:53 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:40.928 00:50:53 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:40.928 00:50:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:40.928 [2024-05-15 00:50:53.309350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:40.928 [2024-05-15 00:50:53.309402] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:40.928 [2024-05-15 00:50:53.309426] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14a7b50 00:03:40.928 [2024-05-15 00:50:53.309442] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:40.928 [2024-05-15 00:50:53.310940] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:40.928 [2024-05-15 00:50:53.310969] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:40.928 Passthru0 00:03:40.928 00:50:53 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:40.928 00:50:53 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:03:40.928 00:50:53 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:40.928 00:50:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.186 00:50:53 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:41.186 00:50:53 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:41.186 { 00:03:41.186 "name": "Malloc0", 00:03:41.186 "aliases": [ 00:03:41.186 "eb0882b8-bfd4-487f-9f2f-71fb94a990d7" 00:03:41.186 ], 00:03:41.186 "product_name": "Malloc disk", 00:03:41.186 "block_size": 512, 00:03:41.186 "num_blocks": 16384, 00:03:41.186 "uuid": "eb0882b8-bfd4-487f-9f2f-71fb94a990d7", 00:03:41.186 "assigned_rate_limits": { 00:03:41.186 "rw_ios_per_sec": 0, 00:03:41.186 "rw_mbytes_per_sec": 0, 00:03:41.186 "r_mbytes_per_sec": 0, 00:03:41.186 "w_mbytes_per_sec": 0 00:03:41.186 }, 00:03:41.186 "claimed": true, 00:03:41.186 "claim_type": "exclusive_write", 00:03:41.186 "zoned": false, 00:03:41.186 "supported_io_types": { 00:03:41.186 "read": true, 00:03:41.186 "write": true, 00:03:41.186 "unmap": true, 00:03:41.186 "write_zeroes": true, 00:03:41.186 "flush": true, 00:03:41.186 "reset": true, 00:03:41.186 "compare": false, 00:03:41.186 "compare_and_write": false, 00:03:41.186 "abort": true, 00:03:41.186 "nvme_admin": false, 00:03:41.186 "nvme_io": false 00:03:41.186 }, 00:03:41.186 "memory_domains": [ 00:03:41.186 { 00:03:41.186 "dma_device_id": "system", 00:03:41.186 "dma_device_type": 1 00:03:41.186 }, 00:03:41.186 { 00:03:41.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.186 "dma_device_type": 2 00:03:41.186 } 00:03:41.186 ], 00:03:41.186 "driver_specific": {} 00:03:41.186 }, 00:03:41.186 { 00:03:41.186 "name": "Passthru0", 00:03:41.186 "aliases": [ 00:03:41.186 "5608ea42-aa6c-5a0b-b9c7-2ae62576211d" 00:03:41.186 ], 00:03:41.186 "product_name": "passthru", 00:03:41.186 "block_size": 512, 00:03:41.186 "num_blocks": 16384, 00:03:41.186 "uuid": "5608ea42-aa6c-5a0b-b9c7-2ae62576211d", 00:03:41.186 "assigned_rate_limits": { 00:03:41.186 "rw_ios_per_sec": 0, 00:03:41.186 "rw_mbytes_per_sec": 0, 00:03:41.186 "r_mbytes_per_sec": 0, 00:03:41.186 "w_mbytes_per_sec": 0 00:03:41.186 }, 00:03:41.186 "claimed": false, 00:03:41.186 "zoned": false, 00:03:41.186 "supported_io_types": { 00:03:41.186 "read": true, 00:03:41.186 "write": true, 00:03:41.186 "unmap": true, 00:03:41.186 "write_zeroes": true, 00:03:41.186 "flush": true, 00:03:41.186 "reset": true, 00:03:41.186 "compare": false, 00:03:41.186 "compare_and_write": false, 00:03:41.186 "abort": true, 00:03:41.186 "nvme_admin": false, 00:03:41.186 "nvme_io": false 00:03:41.186 }, 00:03:41.186 "memory_domains": [ 00:03:41.186 { 00:03:41.186 "dma_device_id": "system", 00:03:41.186 "dma_device_type": 1 00:03:41.186 }, 00:03:41.186 { 00:03:41.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.186 "dma_device_type": 2 00:03:41.186 } 00:03:41.186 ], 00:03:41.186 "driver_specific": { 00:03:41.186 "passthru": { 00:03:41.186 "name": "Passthru0", 00:03:41.186 "base_bdev_name": "Malloc0" 00:03:41.186 } 00:03:41.186 } 00:03:41.186 } 00:03:41.186 ]' 00:03:41.186 00:50:53 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:41.186 00:50:53 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:41.186 00:50:53 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:41.187 00:50:53 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:41.187 00:50:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.187 
00:50:53 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:41.187 00:50:53 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:41.187 00:50:53 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:41.187 00:50:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.187 00:50:53 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:41.187 00:50:53 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:41.187 00:50:53 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:41.187 00:50:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.187 00:50:53 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:41.187 00:50:53 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:41.187 00:50:53 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:41.187 00:50:53 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:41.187 00:03:41.187 real 0m0.230s 00:03:41.187 user 0m0.153s 00:03:41.187 sys 0m0.017s 00:03:41.187 00:50:53 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:41.187 00:50:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.187 ************************************ 00:03:41.187 END TEST rpc_integrity 00:03:41.187 ************************************ 00:03:41.187 00:50:53 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:41.187 00:50:53 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:41.187 00:50:53 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:41.187 00:50:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.187 ************************************ 00:03:41.187 START TEST rpc_plugins 00:03:41.187 ************************************ 00:03:41.187 00:50:53 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:03:41.187 00:50:53 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:41.187 00:50:53 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:41.187 00:50:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:41.187 00:50:53 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:41.187 00:50:53 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:41.187 00:50:53 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:41.187 00:50:53 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:41.187 00:50:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:41.187 00:50:53 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:41.187 00:50:53 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:41.187 { 00:03:41.187 "name": "Malloc1", 00:03:41.187 "aliases": [ 00:03:41.187 "3e6a7ca2-88d1-4b23-8975-3f7fc8075b0b" 00:03:41.187 ], 00:03:41.187 "product_name": "Malloc disk", 00:03:41.187 "block_size": 4096, 00:03:41.187 "num_blocks": 256, 00:03:41.187 "uuid": "3e6a7ca2-88d1-4b23-8975-3f7fc8075b0b", 00:03:41.187 "assigned_rate_limits": { 00:03:41.187 "rw_ios_per_sec": 0, 00:03:41.187 "rw_mbytes_per_sec": 0, 00:03:41.187 "r_mbytes_per_sec": 0, 00:03:41.187 "w_mbytes_per_sec": 0 00:03:41.187 }, 00:03:41.187 "claimed": false, 00:03:41.187 "zoned": false, 00:03:41.187 "supported_io_types": { 00:03:41.187 "read": true, 00:03:41.187 "write": true, 00:03:41.187 "unmap": true, 00:03:41.187 "write_zeroes": true, 00:03:41.187 
"flush": true, 00:03:41.187 "reset": true, 00:03:41.187 "compare": false, 00:03:41.187 "compare_and_write": false, 00:03:41.187 "abort": true, 00:03:41.187 "nvme_admin": false, 00:03:41.187 "nvme_io": false 00:03:41.187 }, 00:03:41.187 "memory_domains": [ 00:03:41.187 { 00:03:41.187 "dma_device_id": "system", 00:03:41.187 "dma_device_type": 1 00:03:41.187 }, 00:03:41.187 { 00:03:41.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.187 "dma_device_type": 2 00:03:41.187 } 00:03:41.187 ], 00:03:41.187 "driver_specific": {} 00:03:41.187 } 00:03:41.187 ]' 00:03:41.187 00:50:53 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:41.187 00:50:53 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:41.187 00:50:53 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:41.187 00:50:53 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:41.187 00:50:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:41.187 00:50:53 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:41.187 00:50:53 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:41.187 00:50:53 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:41.187 00:50:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:41.187 00:50:53 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:41.187 00:50:53 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:41.187 00:50:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:41.444 00:50:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:41.444 00:03:41.444 real 0m0.113s 00:03:41.444 user 0m0.075s 00:03:41.444 sys 0m0.007s 00:03:41.444 00:50:53 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:41.444 00:50:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:41.444 ************************************ 00:03:41.444 END TEST rpc_plugins 00:03:41.444 ************************************ 00:03:41.444 00:50:53 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:41.444 00:50:53 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:41.444 00:50:53 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:41.444 00:50:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.444 ************************************ 00:03:41.444 START TEST rpc_trace_cmd_test 00:03:41.444 ************************************ 00:03:41.444 00:50:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:03:41.444 00:50:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:41.444 00:50:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:41.444 00:50:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:41.444 00:50:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:41.444 00:50:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:41.444 00:50:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:41.444 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1123307", 00:03:41.444 "tpoint_group_mask": "0x8", 00:03:41.444 "iscsi_conn": { 00:03:41.444 "mask": "0x2", 00:03:41.444 "tpoint_mask": "0x0" 00:03:41.444 }, 00:03:41.444 "scsi": { 00:03:41.444 "mask": "0x4", 00:03:41.444 "tpoint_mask": "0x0" 00:03:41.444 }, 00:03:41.444 "bdev": { 00:03:41.444 "mask": "0x8", 00:03:41.444 "tpoint_mask": 
"0xffffffffffffffff" 00:03:41.444 }, 00:03:41.444 "nvmf_rdma": { 00:03:41.444 "mask": "0x10", 00:03:41.444 "tpoint_mask": "0x0" 00:03:41.444 }, 00:03:41.444 "nvmf_tcp": { 00:03:41.444 "mask": "0x20", 00:03:41.444 "tpoint_mask": "0x0" 00:03:41.444 }, 00:03:41.444 "ftl": { 00:03:41.444 "mask": "0x40", 00:03:41.444 "tpoint_mask": "0x0" 00:03:41.444 }, 00:03:41.444 "blobfs": { 00:03:41.444 "mask": "0x80", 00:03:41.444 "tpoint_mask": "0x0" 00:03:41.444 }, 00:03:41.444 "dsa": { 00:03:41.444 "mask": "0x200", 00:03:41.444 "tpoint_mask": "0x0" 00:03:41.444 }, 00:03:41.444 "thread": { 00:03:41.444 "mask": "0x400", 00:03:41.444 "tpoint_mask": "0x0" 00:03:41.444 }, 00:03:41.444 "nvme_pcie": { 00:03:41.444 "mask": "0x800", 00:03:41.444 "tpoint_mask": "0x0" 00:03:41.444 }, 00:03:41.444 "iaa": { 00:03:41.444 "mask": "0x1000", 00:03:41.444 "tpoint_mask": "0x0" 00:03:41.444 }, 00:03:41.444 "nvme_tcp": { 00:03:41.444 "mask": "0x2000", 00:03:41.444 "tpoint_mask": "0x0" 00:03:41.444 }, 00:03:41.444 "bdev_nvme": { 00:03:41.444 "mask": "0x4000", 00:03:41.444 "tpoint_mask": "0x0" 00:03:41.444 }, 00:03:41.444 "sock": { 00:03:41.444 "mask": "0x8000", 00:03:41.444 "tpoint_mask": "0x0" 00:03:41.444 } 00:03:41.444 }' 00:03:41.444 00:50:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:41.444 00:50:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:41.444 00:50:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:41.444 00:50:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:41.444 00:50:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:41.444 00:50:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:41.444 00:50:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:41.444 00:50:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:41.444 00:50:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:41.703 00:50:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:41.703 00:03:41.703 real 0m0.194s 00:03:41.703 user 0m0.170s 00:03:41.703 sys 0m0.015s 00:03:41.703 00:50:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:41.703 00:50:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:41.703 ************************************ 00:03:41.703 END TEST rpc_trace_cmd_test 00:03:41.703 ************************************ 00:03:41.703 00:50:53 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:41.703 00:50:53 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:41.703 00:50:53 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:41.703 00:50:53 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:41.703 00:50:53 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:41.703 00:50:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.703 ************************************ 00:03:41.703 START TEST rpc_daemon_integrity 00:03:41.703 ************************************ 00:03:41.703 00:50:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:03:41.703 00:50:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:41.703 00:50:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:41.703 00:50:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.703 00:50:53 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:41.703 00:50:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:41.703 00:50:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:41.703 00:50:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:41.703 00:50:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:41.703 00:50:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:41.703 00:50:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.703 00:50:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:41.703 00:50:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:41.703 00:50:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:41.703 00:50:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:41.703 00:50:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.703 00:50:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:41.703 00:50:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:41.703 { 00:03:41.703 "name": "Malloc2", 00:03:41.703 "aliases": [ 00:03:41.703 "ade7ae5a-d192-4b9b-a83b-ce8f9b03e646" 00:03:41.703 ], 00:03:41.703 "product_name": "Malloc disk", 00:03:41.703 "block_size": 512, 00:03:41.703 "num_blocks": 16384, 00:03:41.703 "uuid": "ade7ae5a-d192-4b9b-a83b-ce8f9b03e646", 00:03:41.703 "assigned_rate_limits": { 00:03:41.703 "rw_ios_per_sec": 0, 00:03:41.703 "rw_mbytes_per_sec": 0, 00:03:41.703 "r_mbytes_per_sec": 0, 00:03:41.703 "w_mbytes_per_sec": 0 00:03:41.703 }, 00:03:41.703 "claimed": false, 00:03:41.703 "zoned": false, 00:03:41.703 "supported_io_types": { 00:03:41.703 "read": true, 00:03:41.703 "write": true, 00:03:41.703 "unmap": true, 00:03:41.703 "write_zeroes": true, 00:03:41.703 "flush": true, 00:03:41.703 "reset": true, 00:03:41.703 "compare": false, 00:03:41.703 "compare_and_write": false, 00:03:41.703 "abort": true, 00:03:41.703 "nvme_admin": false, 00:03:41.703 "nvme_io": false 00:03:41.703 }, 00:03:41.703 "memory_domains": [ 00:03:41.703 { 00:03:41.703 "dma_device_id": "system", 00:03:41.703 "dma_device_type": 1 00:03:41.703 }, 00:03:41.703 { 00:03:41.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.703 "dma_device_type": 2 00:03:41.703 } 00:03:41.703 ], 00:03:41.703 "driver_specific": {} 00:03:41.703 } 00:03:41.703 ]' 00:03:41.703 00:50:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:41.703 00:50:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:41.703 00:50:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:41.703 00:50:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:41.703 00:50:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.703 [2024-05-15 00:50:53.991301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:41.703 [2024-05-15 00:50:53.991344] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:41.703 [2024-05-15 00:50:53.991364] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14ab260 00:03:41.703 [2024-05-15 00:50:53.991377] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:41.703 [2024-05-15 00:50:53.992554] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:41.703 [2024-05-15 00:50:53.992580] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:41.703 Passthru0 00:03:41.703 00:50:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:41.703 00:50:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:41.703 00:50:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:41.703 00:50:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.703 00:50:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:41.703 00:50:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:41.703 { 00:03:41.703 "name": "Malloc2", 00:03:41.703 "aliases": [ 00:03:41.703 "ade7ae5a-d192-4b9b-a83b-ce8f9b03e646" 00:03:41.703 ], 00:03:41.703 "product_name": "Malloc disk", 00:03:41.703 "block_size": 512, 00:03:41.703 "num_blocks": 16384, 00:03:41.703 "uuid": "ade7ae5a-d192-4b9b-a83b-ce8f9b03e646", 00:03:41.703 "assigned_rate_limits": { 00:03:41.703 "rw_ios_per_sec": 0, 00:03:41.703 "rw_mbytes_per_sec": 0, 00:03:41.703 "r_mbytes_per_sec": 0, 00:03:41.703 "w_mbytes_per_sec": 0 00:03:41.703 }, 00:03:41.703 "claimed": true, 00:03:41.703 "claim_type": "exclusive_write", 00:03:41.703 "zoned": false, 00:03:41.703 "supported_io_types": { 00:03:41.703 "read": true, 00:03:41.703 "write": true, 00:03:41.703 "unmap": true, 00:03:41.703 "write_zeroes": true, 00:03:41.703 "flush": true, 00:03:41.703 "reset": true, 00:03:41.703 "compare": false, 00:03:41.703 "compare_and_write": false, 00:03:41.703 "abort": true, 00:03:41.703 "nvme_admin": false, 00:03:41.703 "nvme_io": false 00:03:41.703 }, 00:03:41.703 "memory_domains": [ 00:03:41.703 { 00:03:41.703 "dma_device_id": "system", 00:03:41.703 "dma_device_type": 1 00:03:41.703 }, 00:03:41.703 { 00:03:41.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.703 "dma_device_type": 2 00:03:41.703 } 00:03:41.703 ], 00:03:41.703 "driver_specific": {} 00:03:41.703 }, 00:03:41.703 { 00:03:41.703 "name": "Passthru0", 00:03:41.703 "aliases": [ 00:03:41.703 "8b23acd5-7916-55a9-beec-4192eef0f32c" 00:03:41.703 ], 00:03:41.703 "product_name": "passthru", 00:03:41.703 "block_size": 512, 00:03:41.703 "num_blocks": 16384, 00:03:41.703 "uuid": "8b23acd5-7916-55a9-beec-4192eef0f32c", 00:03:41.703 "assigned_rate_limits": { 00:03:41.703 "rw_ios_per_sec": 0, 00:03:41.703 "rw_mbytes_per_sec": 0, 00:03:41.703 "r_mbytes_per_sec": 0, 00:03:41.703 "w_mbytes_per_sec": 0 00:03:41.703 }, 00:03:41.703 "claimed": false, 00:03:41.703 "zoned": false, 00:03:41.703 "supported_io_types": { 00:03:41.703 "read": true, 00:03:41.703 "write": true, 00:03:41.703 "unmap": true, 00:03:41.703 "write_zeroes": true, 00:03:41.703 "flush": true, 00:03:41.703 "reset": true, 00:03:41.703 "compare": false, 00:03:41.703 "compare_and_write": false, 00:03:41.703 "abort": true, 00:03:41.703 "nvme_admin": false, 00:03:41.703 "nvme_io": false 00:03:41.703 }, 00:03:41.703 "memory_domains": [ 00:03:41.703 { 00:03:41.703 "dma_device_id": "system", 00:03:41.703 "dma_device_type": 1 00:03:41.703 }, 00:03:41.703 { 00:03:41.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.703 "dma_device_type": 2 00:03:41.703 } 00:03:41.703 ], 00:03:41.703 "driver_specific": { 00:03:41.703 "passthru": { 00:03:41.703 "name": "Passthru0", 00:03:41.703 "base_bdev_name": "Malloc2" 00:03:41.703 } 00:03:41.703 } 00:03:41.703 } 00:03:41.703 ]' 00:03:41.703 00:50:54 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:41.703 00:50:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:41.703 00:50:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:41.703 00:50:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:41.703 00:50:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.703 00:50:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:41.703 00:50:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:41.703 00:50:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:41.703 00:50:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.703 00:50:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:41.703 00:50:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:41.703 00:50:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:41.703 00:50:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:41.703 00:50:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:41.703 00:50:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:41.703 00:50:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:42.002 00:50:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:42.002 00:03:42.002 real 0m0.221s 00:03:42.002 user 0m0.146s 00:03:42.002 sys 0m0.022s 00:03:42.002 00:50:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:42.002 00:50:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.002 ************************************ 00:03:42.002 END TEST rpc_daemon_integrity 00:03:42.002 ************************************ 00:03:42.002 00:50:54 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:42.002 00:50:54 rpc -- rpc/rpc.sh@84 -- # killprocess 1123307 00:03:42.002 00:50:54 rpc -- common/autotest_common.sh@946 -- # '[' -z 1123307 ']' 00:03:42.002 00:50:54 rpc -- common/autotest_common.sh@950 -- # kill -0 1123307 00:03:42.002 00:50:54 rpc -- common/autotest_common.sh@951 -- # uname 00:03:42.002 00:50:54 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:42.002 00:50:54 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1123307 00:03:42.002 00:50:54 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:42.002 00:50:54 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:42.002 00:50:54 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1123307' 00:03:42.002 killing process with pid 1123307 00:03:42.002 00:50:54 rpc -- common/autotest_common.sh@965 -- # kill 1123307 00:03:42.002 00:50:54 rpc -- common/autotest_common.sh@970 -- # wait 1123307 00:03:42.259 00:03:42.259 real 0m1.998s 00:03:42.259 user 0m2.484s 00:03:42.259 sys 0m0.584s 00:03:42.259 00:50:54 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:42.259 00:50:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:42.259 ************************************ 00:03:42.259 END TEST rpc 00:03:42.259 ************************************ 00:03:42.518 00:50:54 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:42.518 00:50:54 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:42.518 00:50:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:42.518 00:50:54 -- common/autotest_common.sh@10 -- # set +x 00:03:42.518 ************************************ 00:03:42.518 START TEST skip_rpc 00:03:42.518 ************************************ 00:03:42.518 00:50:54 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:42.518 * Looking for test storage... 00:03:42.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:42.518 00:50:54 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:42.518 00:50:54 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:42.518 00:50:54 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:42.518 00:50:54 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:42.518 00:50:54 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:42.518 00:50:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:42.518 ************************************ 00:03:42.518 START TEST skip_rpc 00:03:42.518 ************************************ 00:03:42.518 00:50:54 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:03:42.518 00:50:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1123632 00:03:42.518 00:50:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:42.518 00:50:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:42.518 00:50:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:42.518 [2024-05-15 00:50:54.816792] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
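The skip_rpc case started above runs spdk_tgt with --no-rpc-server, so the spdk_get_version call attempted further down must be rejected. A rough manual equivalent, assuming rpc_cmd wraps the stock scripts/rpc.py client against the default /var/tmp/spdk.sock socket:

  # from the spdk checkout: start the target without an RPC server (same flags as the test)
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5
  # no UNIX socket is listening, so this has to fail
  ./scripts/rpc.py spdk_get_version || echo 'RPC rejected, as the test expects'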
00:03:42.518 [2024-05-15 00:50:54.816856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1123632 ] 00:03:42.518 EAL: No free 2048 kB hugepages reported on node 1 00:03:42.518 [2024-05-15 00:50:54.890168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:42.776 [2024-05-15 00:50:55.009501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:48.035 00:50:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:48.035 00:50:59 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:03:48.035 00:50:59 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:48.035 00:50:59 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:03:48.035 00:50:59 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:48.035 00:50:59 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:03:48.035 00:50:59 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:48.035 00:50:59 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:03:48.035 00:50:59 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:48.035 00:50:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.035 00:50:59 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:48.035 00:50:59 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:03:48.035 00:50:59 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:48.035 00:50:59 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:03:48.035 00:50:59 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:48.035 00:50:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:48.035 00:50:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1123632 00:03:48.035 00:50:59 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 1123632 ']' 00:03:48.035 00:50:59 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 1123632 00:03:48.035 00:50:59 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:03:48.035 00:50:59 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:48.035 00:50:59 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1123632 00:03:48.035 00:50:59 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:48.035 00:50:59 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:48.035 00:50:59 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1123632' 00:03:48.035 killing process with pid 1123632 00:03:48.035 00:50:59 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 1123632 00:03:48.035 00:50:59 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 1123632 00:03:48.035 00:03:48.035 real 0m5.482s 00:03:48.035 user 0m5.153s 00:03:48.035 sys 0m0.332s 00:03:48.035 00:51:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:48.035 00:51:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.035 ************************************ 00:03:48.035 END TEST skip_rpc 
00:03:48.035 ************************************ 00:03:48.035 00:51:00 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:48.035 00:51:00 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:48.035 00:51:00 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:48.035 00:51:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.035 ************************************ 00:03:48.035 START TEST skip_rpc_with_json 00:03:48.035 ************************************ 00:03:48.035 00:51:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:03:48.035 00:51:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:48.035 00:51:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1124349 00:03:48.035 00:51:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:48.035 00:51:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:48.035 00:51:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1124349 00:03:48.035 00:51:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 1124349 ']' 00:03:48.035 00:51:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:48.035 00:51:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:48.035 00:51:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:48.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:48.035 00:51:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:48.035 00:51:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:48.035 [2024-05-15 00:51:00.357469] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
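skip_rpc_with_json, starting above, brings up a full target (RPC enabled this time), drives the nvmf subsystem over RPC and snapshots the result with save_config, as the output below shows. Stripped down, and assuming rpc_cmd wraps the stock scripts/rpc.py client, the sequence is roughly:

  ./scripts/rpc.py nvmf_get_transports --trtype tcp    # fails first: the tcp transport does not exist yet
  ./scripts/rpc.py nvmf_create_transport -t tcp         # '*** TCP Transport Init ***'
  ./scripts/rpc.py save_config > test/rpc/config.json   # replayed later via spdk_tgt --json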
00:03:48.035 [2024-05-15 00:51:00.357575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1124349 ] 00:03:48.035 EAL: No free 2048 kB hugepages reported on node 1 00:03:48.292 [2024-05-15 00:51:00.435223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:48.292 [2024-05-15 00:51:00.555442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:49.225 00:51:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:03:49.225 00:51:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:03:49.225 00:51:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:49.225 00:51:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.225 00:51:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:49.225 [2024-05-15 00:51:01.313787] nvmf_rpc.c:2531:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:49.225 request: 00:03:49.225 { 00:03:49.225 "trtype": "tcp", 00:03:49.225 "method": "nvmf_get_transports", 00:03:49.225 "req_id": 1 00:03:49.225 } 00:03:49.226 Got JSON-RPC error response 00:03:49.226 response: 00:03:49.226 { 00:03:49.226 "code": -19, 00:03:49.226 "message": "No such device" 00:03:49.226 } 00:03:49.226 00:51:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:49.226 00:51:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:49.226 00:51:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.226 00:51:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:49.226 [2024-05-15 00:51:01.325954] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:49.226 00:51:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.226 00:51:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:49.226 00:51:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:49.226 00:51:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:49.226 00:51:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:49.226 00:51:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:49.226 { 00:03:49.226 "subsystems": [ 00:03:49.226 { 00:03:49.226 "subsystem": "vfio_user_target", 00:03:49.226 "config": null 00:03:49.226 }, 00:03:49.226 { 00:03:49.226 "subsystem": "keyring", 00:03:49.226 "config": [] 00:03:49.226 }, 00:03:49.226 { 00:03:49.226 "subsystem": "iobuf", 00:03:49.226 "config": [ 00:03:49.226 { 00:03:49.226 "method": "iobuf_set_options", 00:03:49.226 "params": { 00:03:49.226 "small_pool_count": 8192, 00:03:49.226 "large_pool_count": 1024, 00:03:49.226 "small_bufsize": 8192, 00:03:49.226 "large_bufsize": 135168 00:03:49.226 } 00:03:49.226 } 00:03:49.226 ] 00:03:49.226 }, 00:03:49.226 { 00:03:49.226 "subsystem": "sock", 00:03:49.226 "config": [ 00:03:49.226 { 00:03:49.226 "method": "sock_impl_set_options", 00:03:49.226 "params": { 00:03:49.226 "impl_name": "posix", 00:03:49.226 "recv_buf_size": 2097152, 00:03:49.226 "send_buf_size": 2097152, 
00:03:49.226 "enable_recv_pipe": true, 00:03:49.226 "enable_quickack": false, 00:03:49.226 "enable_placement_id": 0, 00:03:49.226 "enable_zerocopy_send_server": true, 00:03:49.226 "enable_zerocopy_send_client": false, 00:03:49.226 "zerocopy_threshold": 0, 00:03:49.226 "tls_version": 0, 00:03:49.226 "enable_ktls": false 00:03:49.226 } 00:03:49.226 }, 00:03:49.226 { 00:03:49.226 "method": "sock_impl_set_options", 00:03:49.226 "params": { 00:03:49.226 "impl_name": "ssl", 00:03:49.226 "recv_buf_size": 4096, 00:03:49.226 "send_buf_size": 4096, 00:03:49.226 "enable_recv_pipe": true, 00:03:49.226 "enable_quickack": false, 00:03:49.226 "enable_placement_id": 0, 00:03:49.226 "enable_zerocopy_send_server": true, 00:03:49.226 "enable_zerocopy_send_client": false, 00:03:49.226 "zerocopy_threshold": 0, 00:03:49.226 "tls_version": 0, 00:03:49.226 "enable_ktls": false 00:03:49.226 } 00:03:49.226 } 00:03:49.226 ] 00:03:49.226 }, 00:03:49.226 { 00:03:49.226 "subsystem": "vmd", 00:03:49.226 "config": [] 00:03:49.226 }, 00:03:49.226 { 00:03:49.226 "subsystem": "accel", 00:03:49.226 "config": [ 00:03:49.226 { 00:03:49.226 "method": "accel_set_options", 00:03:49.226 "params": { 00:03:49.226 "small_cache_size": 128, 00:03:49.226 "large_cache_size": 16, 00:03:49.226 "task_count": 2048, 00:03:49.226 "sequence_count": 2048, 00:03:49.226 "buf_count": 2048 00:03:49.226 } 00:03:49.226 } 00:03:49.226 ] 00:03:49.226 }, 00:03:49.226 { 00:03:49.226 "subsystem": "bdev", 00:03:49.226 "config": [ 00:03:49.226 { 00:03:49.226 "method": "bdev_set_options", 00:03:49.226 "params": { 00:03:49.226 "bdev_io_pool_size": 65535, 00:03:49.226 "bdev_io_cache_size": 256, 00:03:49.226 "bdev_auto_examine": true, 00:03:49.226 "iobuf_small_cache_size": 128, 00:03:49.226 "iobuf_large_cache_size": 16 00:03:49.226 } 00:03:49.226 }, 00:03:49.226 { 00:03:49.226 "method": "bdev_raid_set_options", 00:03:49.226 "params": { 00:03:49.226 "process_window_size_kb": 1024 00:03:49.226 } 00:03:49.226 }, 00:03:49.226 { 00:03:49.226 "method": "bdev_iscsi_set_options", 00:03:49.226 "params": { 00:03:49.226 "timeout_sec": 30 00:03:49.226 } 00:03:49.226 }, 00:03:49.226 { 00:03:49.226 "method": "bdev_nvme_set_options", 00:03:49.226 "params": { 00:03:49.226 "action_on_timeout": "none", 00:03:49.226 "timeout_us": 0, 00:03:49.226 "timeout_admin_us": 0, 00:03:49.226 "keep_alive_timeout_ms": 10000, 00:03:49.226 "arbitration_burst": 0, 00:03:49.226 "low_priority_weight": 0, 00:03:49.226 "medium_priority_weight": 0, 00:03:49.226 "high_priority_weight": 0, 00:03:49.226 "nvme_adminq_poll_period_us": 10000, 00:03:49.226 "nvme_ioq_poll_period_us": 0, 00:03:49.226 "io_queue_requests": 0, 00:03:49.226 "delay_cmd_submit": true, 00:03:49.226 "transport_retry_count": 4, 00:03:49.226 "bdev_retry_count": 3, 00:03:49.226 "transport_ack_timeout": 0, 00:03:49.226 "ctrlr_loss_timeout_sec": 0, 00:03:49.226 "reconnect_delay_sec": 0, 00:03:49.226 "fast_io_fail_timeout_sec": 0, 00:03:49.226 "disable_auto_failback": false, 00:03:49.226 "generate_uuids": false, 00:03:49.226 "transport_tos": 0, 00:03:49.226 "nvme_error_stat": false, 00:03:49.226 "rdma_srq_size": 0, 00:03:49.226 "io_path_stat": false, 00:03:49.226 "allow_accel_sequence": false, 00:03:49.226 "rdma_max_cq_size": 0, 00:03:49.226 "rdma_cm_event_timeout_ms": 0, 00:03:49.226 "dhchap_digests": [ 00:03:49.226 "sha256", 00:03:49.226 "sha384", 00:03:49.226 "sha512" 00:03:49.226 ], 00:03:49.226 "dhchap_dhgroups": [ 00:03:49.226 "null", 00:03:49.226 "ffdhe2048", 00:03:49.226 "ffdhe3072", 00:03:49.226 "ffdhe4096", 00:03:49.226 
"ffdhe6144", 00:03:49.226 "ffdhe8192" 00:03:49.226 ] 00:03:49.226 } 00:03:49.226 }, 00:03:49.226 { 00:03:49.226 "method": "bdev_nvme_set_hotplug", 00:03:49.226 "params": { 00:03:49.226 "period_us": 100000, 00:03:49.226 "enable": false 00:03:49.226 } 00:03:49.226 }, 00:03:49.226 { 00:03:49.226 "method": "bdev_wait_for_examine" 00:03:49.226 } 00:03:49.226 ] 00:03:49.226 }, 00:03:49.226 { 00:03:49.226 "subsystem": "scsi", 00:03:49.226 "config": null 00:03:49.226 }, 00:03:49.226 { 00:03:49.226 "subsystem": "scheduler", 00:03:49.226 "config": [ 00:03:49.226 { 00:03:49.226 "method": "framework_set_scheduler", 00:03:49.226 "params": { 00:03:49.226 "name": "static" 00:03:49.226 } 00:03:49.226 } 00:03:49.226 ] 00:03:49.226 }, 00:03:49.226 { 00:03:49.226 "subsystem": "vhost_scsi", 00:03:49.226 "config": [] 00:03:49.226 }, 00:03:49.226 { 00:03:49.226 "subsystem": "vhost_blk", 00:03:49.226 "config": [] 00:03:49.226 }, 00:03:49.226 { 00:03:49.226 "subsystem": "ublk", 00:03:49.226 "config": [] 00:03:49.226 }, 00:03:49.226 { 00:03:49.226 "subsystem": "nbd", 00:03:49.226 "config": [] 00:03:49.226 }, 00:03:49.226 { 00:03:49.226 "subsystem": "nvmf", 00:03:49.226 "config": [ 00:03:49.226 { 00:03:49.226 "method": "nvmf_set_config", 00:03:49.226 "params": { 00:03:49.226 "discovery_filter": "match_any", 00:03:49.226 "admin_cmd_passthru": { 00:03:49.226 "identify_ctrlr": false 00:03:49.226 } 00:03:49.226 } 00:03:49.226 }, 00:03:49.226 { 00:03:49.226 "method": "nvmf_set_max_subsystems", 00:03:49.226 "params": { 00:03:49.226 "max_subsystems": 1024 00:03:49.226 } 00:03:49.226 }, 00:03:49.226 { 00:03:49.226 "method": "nvmf_set_crdt", 00:03:49.226 "params": { 00:03:49.226 "crdt1": 0, 00:03:49.226 "crdt2": 0, 00:03:49.226 "crdt3": 0 00:03:49.226 } 00:03:49.226 }, 00:03:49.226 { 00:03:49.226 "method": "nvmf_create_transport", 00:03:49.226 "params": { 00:03:49.226 "trtype": "TCP", 00:03:49.226 "max_queue_depth": 128, 00:03:49.226 "max_io_qpairs_per_ctrlr": 127, 00:03:49.226 "in_capsule_data_size": 4096, 00:03:49.226 "max_io_size": 131072, 00:03:49.226 "io_unit_size": 131072, 00:03:49.226 "max_aq_depth": 128, 00:03:49.226 "num_shared_buffers": 511, 00:03:49.226 "buf_cache_size": 4294967295, 00:03:49.226 "dif_insert_or_strip": false, 00:03:49.226 "zcopy": false, 00:03:49.226 "c2h_success": true, 00:03:49.226 "sock_priority": 0, 00:03:49.226 "abort_timeout_sec": 1, 00:03:49.226 "ack_timeout": 0, 00:03:49.226 "data_wr_pool_size": 0 00:03:49.226 } 00:03:49.226 } 00:03:49.226 ] 00:03:49.226 }, 00:03:49.226 { 00:03:49.226 "subsystem": "iscsi", 00:03:49.226 "config": [ 00:03:49.226 { 00:03:49.226 "method": "iscsi_set_options", 00:03:49.226 "params": { 00:03:49.226 "node_base": "iqn.2016-06.io.spdk", 00:03:49.226 "max_sessions": 128, 00:03:49.226 "max_connections_per_session": 2, 00:03:49.226 "max_queue_depth": 64, 00:03:49.226 "default_time2wait": 2, 00:03:49.226 "default_time2retain": 20, 00:03:49.226 "first_burst_length": 8192, 00:03:49.226 "immediate_data": true, 00:03:49.227 "allow_duplicated_isid": false, 00:03:49.227 "error_recovery_level": 0, 00:03:49.227 "nop_timeout": 60, 00:03:49.227 "nop_in_interval": 30, 00:03:49.227 "disable_chap": false, 00:03:49.227 "require_chap": false, 00:03:49.227 "mutual_chap": false, 00:03:49.227 "chap_group": 0, 00:03:49.227 "max_large_datain_per_connection": 64, 00:03:49.227 "max_r2t_per_connection": 4, 00:03:49.227 "pdu_pool_size": 36864, 00:03:49.227 "immediate_data_pool_size": 16384, 00:03:49.227 "data_out_pool_size": 2048 00:03:49.227 } 00:03:49.227 } 00:03:49.227 ] 00:03:49.227 } 
00:03:49.227 ] 00:03:49.227 } 00:03:49.227 00:51:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:49.227 00:51:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1124349 00:03:49.227 00:51:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 1124349 ']' 00:03:49.227 00:51:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 1124349 00:03:49.227 00:51:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:03:49.227 00:51:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:49.227 00:51:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1124349 00:03:49.227 00:51:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:49.227 00:51:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:49.227 00:51:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1124349' 00:03:49.227 killing process with pid 1124349 00:03:49.227 00:51:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 1124349 00:03:49.227 00:51:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 1124349 00:03:49.791 00:51:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1124700 00:03:49.791 00:51:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:49.791 00:51:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:55.108 00:51:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1124700 00:03:55.108 00:51:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 1124700 ']' 00:03:55.108 00:51:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 1124700 00:03:55.108 00:51:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:03:55.108 00:51:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:55.108 00:51:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1124700 00:03:55.108 00:51:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:55.108 00:51:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:55.108 00:51:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1124700' 00:03:55.108 killing process with pid 1124700 00:03:55.108 00:51:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 1124700 00:03:55.108 00:51:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 1124700 00:03:55.108 00:51:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:55.108 00:51:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:55.108 00:03:55.108 real 0m7.159s 00:03:55.108 user 0m6.900s 00:03:55.108 sys 0m0.785s 00:03:55.108 00:51:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 
00:03:55.108 00:51:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:55.108 ************************************ 00:03:55.108 END TEST skip_rpc_with_json 00:03:55.108 ************************************ 00:03:55.108 00:51:07 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:55.108 00:51:07 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:55.108 00:51:07 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:55.108 00:51:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.367 ************************************ 00:03:55.367 START TEST skip_rpc_with_delay 00:03:55.367 ************************************ 00:03:55.367 00:51:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:03:55.367 00:51:07 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:55.367 00:51:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:03:55.367 00:51:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:55.367 00:51:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:55.367 00:51:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:55.367 00:51:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:55.367 00:51:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:55.367 00:51:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:55.367 00:51:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:55.367 00:51:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:55.367 00:51:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:55.367 00:51:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:55.367 [2024-05-15 00:51:07.567228] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
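That startup error is exactly what skip_rpc_with_delay is probing for: --wait-for-rpc only makes sense when an RPC server will be started, so combining it with --no-rpc-server has to abort. The failing invocation, taken from the test with only the workspace prefix dropped:

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  # expected: Cannot use '--wait-for-rpc' if no RPC server is going to be started.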
00:03:55.367 [2024-05-15 00:51:07.567351] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:03:55.367 00:51:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:03:55.367 00:51:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:55.367 00:51:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:03:55.367 00:51:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:55.367 00:03:55.367 real 0m0.066s 00:03:55.367 user 0m0.041s 00:03:55.367 sys 0m0.024s 00:03:55.367 00:51:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:55.367 00:51:07 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:55.367 ************************************ 00:03:55.367 END TEST skip_rpc_with_delay 00:03:55.367 ************************************ 00:03:55.367 00:51:07 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:55.367 00:51:07 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:55.367 00:51:07 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:55.367 00:51:07 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:55.367 00:51:07 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:55.367 00:51:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.367 ************************************ 00:03:55.367 START TEST exit_on_failed_rpc_init 00:03:55.367 ************************************ 00:03:55.367 00:51:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:03:55.367 00:51:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1125656 00:03:55.367 00:51:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:55.367 00:51:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1125656 00:03:55.367 00:51:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 1125656 ']' 00:03:55.367 00:51:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:55.367 00:51:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:55.367 00:51:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:55.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:55.367 00:51:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:55.367 00:51:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:55.367 [2024-05-15 00:51:07.688361] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:03:55.367 [2024-05-15 00:51:07.688449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1125656 ] 00:03:55.367 EAL: No free 2048 kB hugepages reported on node 1 00:03:55.625 [2024-05-15 00:51:07.763857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:55.625 [2024-05-15 00:51:07.875588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.884 00:51:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:03:55.884 00:51:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:03:55.884 00:51:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:55.884 00:51:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:55.884 00:51:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:03:55.884 00:51:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:55.884 00:51:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:55.884 00:51:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:55.884 00:51:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:55.884 00:51:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:55.884 00:51:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:55.884 00:51:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:55.884 00:51:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:55.884 00:51:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:55.884 00:51:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:55.884 [2024-05-15 00:51:08.189500] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:03:55.884 [2024-05-15 00:51:08.189584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1125793 ] 00:03:55.884 EAL: No free 2048 kB hugepages reported on node 1 00:03:55.884 [2024-05-15 00:51:08.261405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.143 [2024-05-15 00:51:08.381496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:03:56.143 [2024-05-15 00:51:08.381636] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
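exit_on_failed_rpc_init provokes that collision deliberately: a first spdk_tgt (mask 0x1) already owns the default RPC socket, so a second one (mask 0x2) must fail RPC init and exit non-zero. A hand-run sketch, assuming the default /var/tmp/spdk.sock path:

  ./build/bin/spdk_tgt -m 0x1 &   # first instance claims /var/tmp/spdk.sock
  sleep 5
  ./build/bin/spdk_tgt -m 0x2     # second instance: socket in use, spdk_app_stop'd on non-zero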
00:03:56.143 [2024-05-15 00:51:08.381667] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:56.143 [2024-05-15 00:51:08.381681] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:56.143 00:51:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:03:56.143 00:51:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:56.143 00:51:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:03:56.143 00:51:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:03:56.143 00:51:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:03:56.143 00:51:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:56.143 00:51:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:56.143 00:51:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1125656 00:03:56.143 00:51:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 1125656 ']' 00:03:56.143 00:51:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 1125656 00:03:56.143 00:51:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:03:56.143 00:51:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:56.143 00:51:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1125656 00:03:56.400 00:51:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:56.400 00:51:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:56.400 00:51:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1125656' 00:03:56.400 killing process with pid 1125656 00:03:56.400 00:51:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 1125656 00:03:56.400 00:51:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 1125656 00:03:56.659 00:03:56.659 real 0m1.336s 00:03:56.659 user 0m1.492s 00:03:56.659 sys 0m0.471s 00:03:56.659 00:51:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:56.659 00:51:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:56.659 ************************************ 00:03:56.659 END TEST exit_on_failed_rpc_init 00:03:56.659 ************************************ 00:03:56.659 00:51:08 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:56.659 00:03:56.659 real 0m14.315s 00:03:56.659 user 0m13.684s 00:03:56.659 sys 0m1.794s 00:03:56.659 00:51:08 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:56.659 00:51:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.659 ************************************ 00:03:56.659 END TEST skip_rpc 00:03:56.659 ************************************ 00:03:56.659 00:51:09 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:56.659 00:51:09 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:56.659 00:51:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:56.659 00:51:09 -- 
common/autotest_common.sh@10 -- # set +x 00:03:56.659 ************************************ 00:03:56.659 START TEST rpc_client 00:03:56.659 ************************************ 00:03:56.659 00:51:09 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:56.918 * Looking for test storage... 00:03:56.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:56.918 00:51:09 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:56.918 OK 00:03:56.918 00:51:09 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:56.918 00:03:56.918 real 0m0.058s 00:03:56.918 user 0m0.023s 00:03:56.918 sys 0m0.039s 00:03:56.918 00:51:09 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:56.918 00:51:09 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:56.918 ************************************ 00:03:56.918 END TEST rpc_client 00:03:56.918 ************************************ 00:03:56.918 00:51:09 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:56.918 00:51:09 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:56.918 00:51:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:56.918 00:51:09 -- common/autotest_common.sh@10 -- # set +x 00:03:56.918 ************************************ 00:03:56.918 START TEST json_config 00:03:56.918 ************************************ 00:03:56.918 00:51:09 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:56.918 00:51:09 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:56.918 00:51:09 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:56.918 00:51:09 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:56.918 00:51:09 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:56.918 00:51:09 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:56.918 00:51:09 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:56.918 00:51:09 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:56.918 00:51:09 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:56.918 00:51:09 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:56.918 00:51:09 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:56.918 00:51:09 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:56.918 00:51:09 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:56.918 00:51:09 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:56.918 00:51:09 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:56.918 00:51:09 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:56.918 00:51:09 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:56.918 00:51:09 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:56.918 00:51:09 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:56.918 00:51:09 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:56.918 00:51:09 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:56.918 00:51:09 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:56.918 00:51:09 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:56.918 00:51:09 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.918 00:51:09 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.919 00:51:09 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.919 00:51:09 json_config -- paths/export.sh@5 -- # export PATH 00:03:56.919 00:51:09 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.919 00:51:09 json_config -- nvmf/common.sh@47 -- # : 0 00:03:56.919 00:51:09 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:56.919 00:51:09 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:56.919 00:51:09 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:56.919 00:51:09 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:56.919 00:51:09 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:56.919 00:51:09 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:56.919 00:51:09 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:56.919 00:51:09 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:56.919 00:51:09 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:56.919 00:51:09 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:56.919 00:51:09 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:56.919 00:51:09 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:56.919 00:51:09 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:56.919 00:51:09 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:56.919 00:51:09 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:56.919 00:51:09 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:56.919 00:51:09 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:56.919 00:51:09 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:56.919 00:51:09 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:56.919 00:51:09 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:56.919 00:51:09 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:56.919 00:51:09 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:56.919 00:51:09 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:56.919 00:51:09 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:03:56.919 INFO: JSON configuration test init 00:03:56.919 00:51:09 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:03:56.919 00:51:09 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:03:56.919 00:51:09 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:56.919 00:51:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.919 00:51:09 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:03:56.919 00:51:09 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:56.919 00:51:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.919 00:51:09 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:03:56.919 00:51:09 json_config -- json_config/common.sh@9 -- # local app=target 00:03:56.919 00:51:09 json_config -- json_config/common.sh@10 -- # shift 00:03:56.919 00:51:09 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:56.919 00:51:09 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:56.919 00:51:09 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:56.919 00:51:09 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:56.919 00:51:09 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:56.919 00:51:09 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1126177 00:03:56.919 00:51:09 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:56.919 00:51:09 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:56.919 Waiting for target to run... 
00:03:56.919 00:51:09 json_config -- json_config/common.sh@25 -- # waitforlisten 1126177 /var/tmp/spdk_tgt.sock 00:03:56.919 00:51:09 json_config -- common/autotest_common.sh@827 -- # '[' -z 1126177 ']' 00:03:56.919 00:51:09 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:56.919 00:51:09 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:56.919 00:51:09 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:56.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:56.919 00:51:09 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:56.919 00:51:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.919 [2024-05-15 00:51:09.249363] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:03:56.919 [2024-05-15 00:51:09.249444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1126177 ] 00:03:56.919 EAL: No free 2048 kB hugepages reported on node 1 00:03:57.486 [2024-05-15 00:51:09.615407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:57.486 [2024-05-15 00:51:09.707131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.051 00:51:10 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:03:58.051 00:51:10 json_config -- common/autotest_common.sh@860 -- # return 0 00:03:58.051 00:51:10 json_config -- json_config/common.sh@26 -- # echo '' 00:03:58.051 00:03:58.051 00:51:10 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:03:58.051 00:51:10 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:03:58.051 00:51:10 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:58.051 00:51:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.051 00:51:10 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:03:58.051 00:51:10 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:03:58.051 00:51:10 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:58.051 00:51:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.052 00:51:10 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:58.052 00:51:10 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:03:58.052 00:51:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:01.334 00:51:13 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:01.334 00:51:13 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:01.334 00:51:13 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:01.334 00:51:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.334 00:51:13 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:01.334 00:51:13 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:01.334 00:51:13 json_config -- 
json_config/json_config.sh@46 -- # local enabled_types 00:04:01.334 00:51:13 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:01.334 00:51:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:01.334 00:51:13 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:01.334 00:51:13 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:01.334 00:51:13 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:01.334 00:51:13 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:01.334 00:51:13 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:01.334 00:51:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:01.334 00:51:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.334 00:51:13 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:01.334 00:51:13 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:01.334 00:51:13 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:01.334 00:51:13 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:01.334 00:51:13 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:01.334 00:51:13 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:01.334 00:51:13 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:01.334 00:51:13 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:01.334 00:51:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.334 00:51:13 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:01.334 00:51:13 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:01.334 00:51:13 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:01.334 00:51:13 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:01.334 00:51:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:01.592 MallocForNvmf0 00:04:01.592 00:51:13 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:01.592 00:51:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:01.850 MallocForNvmf1 00:04:01.850 00:51:14 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:01.850 00:51:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:02.107 [2024-05-15 00:51:14.414334] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:02.107 00:51:14 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:02.107 00:51:14 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:02.363 00:51:14 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:02.363 00:51:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:02.620 00:51:14 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:02.620 00:51:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:02.878 00:51:15 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:02.878 00:51:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:03.136 [2024-05-15 00:51:15.397085] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:04:03.136 [2024-05-15 00:51:15.397691] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:03.136 00:51:15 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:03.136 00:51:15 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:03.136 00:51:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.136 00:51:15 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:03.136 00:51:15 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:03.136 00:51:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.136 00:51:15 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:03.136 00:51:15 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:03.136 00:51:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:03.393 MallocBdevForConfigChangeCheck 00:04:03.394 00:51:15 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:03.394 00:51:15 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:03.394 00:51:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.394 00:51:15 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:03.394 00:51:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:03.959 00:51:16 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:03.959 INFO: shutting down applications... 
00:04:03.960 00:51:16 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:03.960 00:51:16 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:03.960 00:51:16 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:03.960 00:51:16 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:05.333 Calling clear_iscsi_subsystem 00:04:05.333 Calling clear_nvmf_subsystem 00:04:05.333 Calling clear_nbd_subsystem 00:04:05.333 Calling clear_ublk_subsystem 00:04:05.333 Calling clear_vhost_blk_subsystem 00:04:05.333 Calling clear_vhost_scsi_subsystem 00:04:05.333 Calling clear_bdev_subsystem 00:04:05.591 00:51:17 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:05.591 00:51:17 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:05.591 00:51:17 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:05.591 00:51:17 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:05.591 00:51:17 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:05.591 00:51:17 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:05.850 00:51:18 json_config -- json_config/json_config.sh@345 -- # break 00:04:05.850 00:51:18 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:05.850 00:51:18 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:05.850 00:51:18 json_config -- json_config/common.sh@31 -- # local app=target 00:04:05.850 00:51:18 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:05.850 00:51:18 json_config -- json_config/common.sh@35 -- # [[ -n 1126177 ]] 00:04:05.850 00:51:18 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1126177 00:04:05.850 [2024-05-15 00:51:18.113523] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:04:05.850 00:51:18 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:05.850 00:51:18 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:05.850 00:51:18 json_config -- json_config/common.sh@41 -- # kill -0 1126177 00:04:05.850 00:51:18 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:06.415 00:51:18 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:06.415 00:51:18 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:06.415 00:51:18 json_config -- json_config/common.sh@41 -- # kill -0 1126177 00:04:06.415 00:51:18 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:06.415 00:51:18 json_config -- json_config/common.sh@43 -- # break 00:04:06.415 00:51:18 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:06.415 00:51:18 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:06.415 SPDK target shutdown done 00:04:06.415 00:51:18 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching 
applications...' 00:04:06.415 INFO: relaunching applications... 00:04:06.415 00:51:18 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:06.415 00:51:18 json_config -- json_config/common.sh@9 -- # local app=target 00:04:06.415 00:51:18 json_config -- json_config/common.sh@10 -- # shift 00:04:06.415 00:51:18 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:06.415 00:51:18 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:06.415 00:51:18 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:06.415 00:51:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:06.415 00:51:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:06.415 00:51:18 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1127490 00:04:06.415 00:51:18 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:06.415 00:51:18 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:06.415 Waiting for target to run... 00:04:06.415 00:51:18 json_config -- json_config/common.sh@25 -- # waitforlisten 1127490 /var/tmp/spdk_tgt.sock 00:04:06.415 00:51:18 json_config -- common/autotest_common.sh@827 -- # '[' -z 1127490 ']' 00:04:06.415 00:51:18 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:06.415 00:51:18 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:06.415 00:51:18 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:06.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:06.415 00:51:18 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:06.415 00:51:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.415 [2024-05-15 00:51:18.672029] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:04:06.415 [2024-05-15 00:51:18.672132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1127490 ] 00:04:06.415 EAL: No free 2048 kB hugepages reported on node 1 00:04:06.981 [2024-05-15 00:51:19.210545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.981 [2024-05-15 00:51:19.318018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.260 [2024-05-15 00:51:22.361980] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:10.260 [2024-05-15 00:51:22.393944] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:04:10.260 [2024-05-15 00:51:22.394453] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:10.857 00:51:23 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:10.857 00:51:23 json_config -- common/autotest_common.sh@860 -- # return 0 00:04:10.857 00:51:23 json_config -- json_config/common.sh@26 -- # echo '' 00:04:10.857 00:04:10.857 00:51:23 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:10.857 00:51:23 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:10.857 INFO: Checking if target configuration is the same... 00:04:10.857 00:51:23 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:10.857 00:51:23 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:10.857 00:51:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:10.857 + '[' 2 -ne 2 ']' 00:04:10.857 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:10.857 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:10.857 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:10.857 +++ basename /dev/fd/62 00:04:10.857 ++ mktemp /tmp/62.XXX 00:04:10.857 + tmp_file_1=/tmp/62.Jah 00:04:10.857 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:10.857 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:10.857 + tmp_file_2=/tmp/spdk_tgt_config.json.TWq 00:04:10.857 + ret=0 00:04:10.857 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:11.115 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:11.371 + diff -u /tmp/62.Jah /tmp/spdk_tgt_config.json.TWq 00:04:11.371 + echo 'INFO: JSON config files are the same' 00:04:11.371 INFO: JSON config files are the same 00:04:11.371 + rm /tmp/62.Jah /tmp/spdk_tgt_config.json.TWq 00:04:11.371 + exit 0 00:04:11.371 00:51:23 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:11.371 00:51:23 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:11.371 INFO: changing configuration and checking if this can be detected... 
00:04:11.371 00:51:23 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:11.371 00:51:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:11.372 00:51:23 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:11.372 00:51:23 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:11.372 00:51:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:11.629 + '[' 2 -ne 2 ']' 00:04:11.629 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:11.629 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:11.629 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:11.629 +++ basename /dev/fd/62 00:04:11.629 ++ mktemp /tmp/62.XXX 00:04:11.629 + tmp_file_1=/tmp/62.o2H 00:04:11.629 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:11.629 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:11.629 + tmp_file_2=/tmp/spdk_tgt_config.json.Bkj 00:04:11.629 + ret=0 00:04:11.629 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:11.888 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:11.888 + diff -u /tmp/62.o2H /tmp/spdk_tgt_config.json.Bkj 00:04:11.888 + ret=1 00:04:11.888 + echo '=== Start of file: /tmp/62.o2H ===' 00:04:11.888 + cat /tmp/62.o2H 00:04:11.888 + echo '=== End of file: /tmp/62.o2H ===' 00:04:11.888 + echo '' 00:04:11.888 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Bkj ===' 00:04:11.888 + cat /tmp/spdk_tgt_config.json.Bkj 00:04:11.888 + echo '=== End of file: /tmp/spdk_tgt_config.json.Bkj ===' 00:04:11.888 + echo '' 00:04:11.888 + rm /tmp/62.o2H /tmp/spdk_tgt_config.json.Bkj 00:04:11.888 + exit 1 00:04:11.888 00:51:24 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:11.888 INFO: configuration change detected. 
00:04:11.888 00:51:24 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:11.888 00:51:24 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:11.888 00:51:24 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:11.888 00:51:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.888 00:51:24 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:11.888 00:51:24 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:11.888 00:51:24 json_config -- json_config/json_config.sh@317 -- # [[ -n 1127490 ]] 00:04:11.888 00:51:24 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:11.888 00:51:24 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:11.888 00:51:24 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:11.888 00:51:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.888 00:51:24 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:11.888 00:51:24 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:11.888 00:51:24 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:11.888 00:51:24 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:11.888 00:51:24 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:11.888 00:51:24 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:11.888 00:51:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:11.888 00:51:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.888 00:51:24 json_config -- json_config/json_config.sh@323 -- # killprocess 1127490 00:04:11.888 00:51:24 json_config -- common/autotest_common.sh@946 -- # '[' -z 1127490 ']' 00:04:11.888 00:51:24 json_config -- common/autotest_common.sh@950 -- # kill -0 1127490 00:04:11.888 00:51:24 json_config -- common/autotest_common.sh@951 -- # uname 00:04:11.888 00:51:24 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:11.888 00:51:24 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1127490 00:04:11.888 00:51:24 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:11.888 00:51:24 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:11.888 00:51:24 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1127490' 00:04:11.888 killing process with pid 1127490 00:04:11.888 00:51:24 json_config -- common/autotest_common.sh@965 -- # kill 1127490 00:04:11.888 [2024-05-15 00:51:24.256104] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:04:11.888 00:51:24 json_config -- common/autotest_common.sh@970 -- # wait 1127490 00:04:13.787 00:51:25 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:13.787 00:51:25 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:13.787 00:51:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:13.787 00:51:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.787 00:51:25 
json_config -- json_config/json_config.sh@328 -- # return 0 00:04:13.788 00:51:25 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:13.788 INFO: Success 00:04:13.788 00:04:13.788 real 0m16.810s 00:04:13.788 user 0m18.796s 00:04:13.788 sys 0m2.102s 00:04:13.788 00:51:25 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:13.788 00:51:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.788 ************************************ 00:04:13.788 END TEST json_config 00:04:13.788 ************************************ 00:04:13.788 00:51:25 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:13.788 00:51:25 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:13.788 00:51:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:13.788 00:51:25 -- common/autotest_common.sh@10 -- # set +x 00:04:13.788 ************************************ 00:04:13.788 START TEST json_config_extra_key 00:04:13.788 ************************************ 00:04:13.788 00:51:26 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:13.788 00:51:26 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:13.788 00:51:26 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:13.788 00:51:26 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:13.788 00:51:26 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:13.788 00:51:26 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:13.788 00:51:26 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:13.788 00:51:26 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:13.788 00:51:26 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:13.788 00:51:26 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:13.788 00:51:26 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:13.788 00:51:26 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:13.788 00:51:26 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:13.788 00:51:26 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:13.788 00:51:26 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:13.788 00:51:26 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:13.788 00:51:26 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:13.788 00:51:26 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:13.788 00:51:26 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:13.788 00:51:26 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:13.788 00:51:26 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:13.788 00:51:26 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:13.788 
00:51:26 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:13.788 00:51:26 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.788 00:51:26 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.788 00:51:26 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.788 00:51:26 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:13.788 00:51:26 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.788 00:51:26 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:13.788 00:51:26 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:13.788 00:51:26 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:13.788 00:51:26 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:13.788 00:51:26 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:13.788 00:51:26 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:13.788 00:51:26 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:13.788 00:51:26 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:13.788 00:51:26 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:13.788 00:51:26 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:13.788 00:51:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:13.788 00:51:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:13.788 00:51:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:13.788 00:51:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:13.788 00:51:26 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:13.788 00:51:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:13.788 00:51:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:13.788 00:51:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:13.788 00:51:26 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:13.788 00:51:26 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:13.788 INFO: launching applications... 00:04:13.788 00:51:26 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:13.788 00:51:26 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:13.788 00:51:26 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:13.788 00:51:26 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:13.788 00:51:26 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:13.788 00:51:26 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:13.788 00:51:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:13.788 00:51:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:13.788 00:51:26 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1128413 00:04:13.788 00:51:26 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:13.788 Waiting for target to run... 00:04:13.788 00:51:26 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1128413 /var/tmp/spdk_tgt.sock 00:04:13.788 00:51:26 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 1128413 ']' 00:04:13.788 00:51:26 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:13.788 00:51:26 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:13.788 00:51:26 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:13.788 00:51:26 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:13.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:13.788 00:51:26 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:13.789 00:51:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:13.789 [2024-05-15 00:51:26.111377] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:04:13.789 [2024-05-15 00:51:26.111460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1128413 ] 00:04:13.789 EAL: No free 2048 kB hugepages reported on node 1 00:04:14.354 [2024-05-15 00:51:26.613508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.354 [2024-05-15 00:51:26.720767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.920 00:51:27 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:14.920 00:51:27 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:04:14.920 00:51:27 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:14.920 00:04:14.920 00:51:27 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:14.920 INFO: shutting down applications... 00:04:14.920 00:51:27 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:14.920 00:51:27 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:14.920 00:51:27 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:14.920 00:51:27 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1128413 ]] 00:04:14.920 00:51:27 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1128413 00:04:14.920 00:51:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:14.920 00:51:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:14.920 00:51:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1128413 00:04:14.920 00:51:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:15.178 00:51:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:15.178 00:51:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:15.178 00:51:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1128413 00:04:15.178 00:51:27 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:15.178 00:51:27 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:15.178 00:51:27 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:15.178 00:51:27 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:15.178 SPDK target shutdown done 00:04:15.178 00:51:27 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:15.178 Success 00:04:15.178 00:04:15.178 real 0m1.545s 00:04:15.178 user 0m1.400s 00:04:15.178 sys 0m0.595s 00:04:15.178 00:51:27 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:15.178 00:51:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:15.178 ************************************ 00:04:15.178 END TEST json_config_extra_key 00:04:15.178 ************************************ 00:04:15.437 00:51:27 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:15.437 00:51:27 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:15.437 00:51:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:15.437 00:51:27 -- common/autotest_common.sh@10 -- # set +x 00:04:15.437 ************************************ 
00:04:15.437 START TEST alias_rpc 00:04:15.437 ************************************ 00:04:15.437 00:51:27 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:15.437 * Looking for test storage... 00:04:15.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:15.437 00:51:27 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:15.437 00:51:27 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1128721 00:04:15.437 00:51:27 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:15.437 00:51:27 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1128721 00:04:15.437 00:51:27 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 1128721 ']' 00:04:15.437 00:51:27 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.437 00:51:27 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:15.437 00:51:27 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.437 00:51:27 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:15.437 00:51:27 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.437 [2024-05-15 00:51:27.707370] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:04:15.437 [2024-05-15 00:51:27.707447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1128721 ] 00:04:15.437 EAL: No free 2048 kB hugepages reported on node 1 00:04:15.437 [2024-05-15 00:51:27.773893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.695 [2024-05-15 00:51:27.882018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.954 00:51:28 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:15.954 00:51:28 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:04:15.954 00:51:28 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:16.211 00:51:28 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1128721 00:04:16.211 00:51:28 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 1128721 ']' 00:04:16.211 00:51:28 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 1128721 00:04:16.211 00:51:28 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:04:16.211 00:51:28 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:16.211 00:51:28 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1128721 00:04:16.211 00:51:28 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:16.211 00:51:28 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:16.211 00:51:28 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1128721' 00:04:16.211 killing process with pid 1128721 00:04:16.211 00:51:28 alias_rpc -- common/autotest_common.sh@965 -- # kill 1128721 00:04:16.211 00:51:28 alias_rpc -- common/autotest_common.sh@970 -- # wait 1128721 
00:04:16.777 00:04:16.777 real 0m1.277s 00:04:16.777 user 0m1.358s 00:04:16.777 sys 0m0.414s 00:04:16.777 00:51:28 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:16.777 00:51:28 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.777 ************************************ 00:04:16.777 END TEST alias_rpc 00:04:16.777 ************************************ 00:04:16.777 00:51:28 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:04:16.777 00:51:28 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:16.777 00:51:28 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:16.777 00:51:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:16.777 00:51:28 -- common/autotest_common.sh@10 -- # set +x 00:04:16.777 ************************************ 00:04:16.777 START TEST spdkcli_tcp 00:04:16.777 ************************************ 00:04:16.777 00:51:28 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:16.777 * Looking for test storage... 00:04:16.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:16.777 00:51:28 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:16.777 00:51:28 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:16.777 00:51:28 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:16.777 00:51:28 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:16.777 00:51:28 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:16.777 00:51:28 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:16.777 00:51:28 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:16.777 00:51:28 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:16.777 00:51:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:16.777 00:51:28 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1128912 00:04:16.777 00:51:28 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:16.777 00:51:28 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1128912 00:04:16.777 00:51:28 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 1128912 ']' 00:04:16.777 00:51:28 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:16.777 00:51:28 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:16.777 00:51:28 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:16.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:16.777 00:51:28 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:16.777 00:51:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:16.777 [2024-05-15 00:51:29.048836] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:04:16.777 [2024-05-15 00:51:29.048952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1128912 ] 00:04:16.777 EAL: No free 2048 kB hugepages reported on node 1 00:04:16.777 [2024-05-15 00:51:29.117417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:17.036 [2024-05-15 00:51:29.226202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:17.036 [2024-05-15 00:51:29.226208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.602 00:51:29 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:17.602 00:51:29 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:04:17.602 00:51:29 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1129052 00:04:17.602 00:51:29 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:17.602 00:51:29 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:17.860 [ 00:04:17.860 "bdev_malloc_delete", 00:04:17.860 "bdev_malloc_create", 00:04:17.860 "bdev_null_resize", 00:04:17.860 "bdev_null_delete", 00:04:17.860 "bdev_null_create", 00:04:17.860 "bdev_nvme_cuse_unregister", 00:04:17.860 "bdev_nvme_cuse_register", 00:04:17.860 "bdev_opal_new_user", 00:04:17.860 "bdev_opal_set_lock_state", 00:04:17.860 "bdev_opal_delete", 00:04:17.860 "bdev_opal_get_info", 00:04:17.860 "bdev_opal_create", 00:04:17.860 "bdev_nvme_opal_revert", 00:04:17.860 "bdev_nvme_opal_init", 00:04:17.860 "bdev_nvme_send_cmd", 00:04:17.860 "bdev_nvme_get_path_iostat", 00:04:17.860 "bdev_nvme_get_mdns_discovery_info", 00:04:17.860 "bdev_nvme_stop_mdns_discovery", 00:04:17.860 "bdev_nvme_start_mdns_discovery", 00:04:17.860 "bdev_nvme_set_multipath_policy", 00:04:17.860 "bdev_nvme_set_preferred_path", 00:04:17.860 "bdev_nvme_get_io_paths", 00:04:17.860 "bdev_nvme_remove_error_injection", 00:04:17.860 "bdev_nvme_add_error_injection", 00:04:17.860 "bdev_nvme_get_discovery_info", 00:04:17.860 "bdev_nvme_stop_discovery", 00:04:17.860 "bdev_nvme_start_discovery", 00:04:17.860 "bdev_nvme_get_controller_health_info", 00:04:17.860 "bdev_nvme_disable_controller", 00:04:17.860 "bdev_nvme_enable_controller", 00:04:17.860 "bdev_nvme_reset_controller", 00:04:17.860 "bdev_nvme_get_transport_statistics", 00:04:17.860 "bdev_nvme_apply_firmware", 00:04:17.860 "bdev_nvme_detach_controller", 00:04:17.860 "bdev_nvme_get_controllers", 00:04:17.860 "bdev_nvme_attach_controller", 00:04:17.860 "bdev_nvme_set_hotplug", 00:04:17.860 "bdev_nvme_set_options", 00:04:17.860 "bdev_passthru_delete", 00:04:17.860 "bdev_passthru_create", 00:04:17.860 "bdev_lvol_check_shallow_copy", 00:04:17.860 "bdev_lvol_start_shallow_copy", 00:04:17.860 "bdev_lvol_grow_lvstore", 00:04:17.860 "bdev_lvol_get_lvols", 00:04:17.860 "bdev_lvol_get_lvstores", 00:04:17.860 "bdev_lvol_delete", 00:04:17.860 "bdev_lvol_set_read_only", 00:04:17.860 "bdev_lvol_resize", 00:04:17.860 "bdev_lvol_decouple_parent", 00:04:17.860 "bdev_lvol_inflate", 00:04:17.860 "bdev_lvol_rename", 00:04:17.860 "bdev_lvol_clone_bdev", 00:04:17.860 "bdev_lvol_clone", 00:04:17.860 "bdev_lvol_snapshot", 00:04:17.860 "bdev_lvol_create", 00:04:17.860 "bdev_lvol_delete_lvstore", 00:04:17.860 "bdev_lvol_rename_lvstore", 00:04:17.860 "bdev_lvol_create_lvstore", 00:04:17.860 "bdev_raid_set_options", 
00:04:17.860 "bdev_raid_remove_base_bdev", 00:04:17.860 "bdev_raid_add_base_bdev", 00:04:17.860 "bdev_raid_delete", 00:04:17.860 "bdev_raid_create", 00:04:17.860 "bdev_raid_get_bdevs", 00:04:17.860 "bdev_error_inject_error", 00:04:17.860 "bdev_error_delete", 00:04:17.860 "bdev_error_create", 00:04:17.860 "bdev_split_delete", 00:04:17.860 "bdev_split_create", 00:04:17.860 "bdev_delay_delete", 00:04:17.860 "bdev_delay_create", 00:04:17.860 "bdev_delay_update_latency", 00:04:17.860 "bdev_zone_block_delete", 00:04:17.860 "bdev_zone_block_create", 00:04:17.860 "blobfs_create", 00:04:17.860 "blobfs_detect", 00:04:17.860 "blobfs_set_cache_size", 00:04:17.860 "bdev_aio_delete", 00:04:17.860 "bdev_aio_rescan", 00:04:17.860 "bdev_aio_create", 00:04:17.860 "bdev_ftl_set_property", 00:04:17.860 "bdev_ftl_get_properties", 00:04:17.860 "bdev_ftl_get_stats", 00:04:17.860 "bdev_ftl_unmap", 00:04:17.860 "bdev_ftl_unload", 00:04:17.860 "bdev_ftl_delete", 00:04:17.860 "bdev_ftl_load", 00:04:17.860 "bdev_ftl_create", 00:04:17.860 "bdev_virtio_attach_controller", 00:04:17.860 "bdev_virtio_scsi_get_devices", 00:04:17.860 "bdev_virtio_detach_controller", 00:04:17.860 "bdev_virtio_blk_set_hotplug", 00:04:17.860 "bdev_iscsi_delete", 00:04:17.860 "bdev_iscsi_create", 00:04:17.860 "bdev_iscsi_set_options", 00:04:17.860 "accel_error_inject_error", 00:04:17.860 "ioat_scan_accel_module", 00:04:17.860 "dsa_scan_accel_module", 00:04:17.860 "iaa_scan_accel_module", 00:04:17.860 "vfu_virtio_create_scsi_endpoint", 00:04:17.860 "vfu_virtio_scsi_remove_target", 00:04:17.860 "vfu_virtio_scsi_add_target", 00:04:17.860 "vfu_virtio_create_blk_endpoint", 00:04:17.860 "vfu_virtio_delete_endpoint", 00:04:17.860 "keyring_file_remove_key", 00:04:17.860 "keyring_file_add_key", 00:04:17.860 "iscsi_get_histogram", 00:04:17.860 "iscsi_enable_histogram", 00:04:17.860 "iscsi_set_options", 00:04:17.860 "iscsi_get_auth_groups", 00:04:17.860 "iscsi_auth_group_remove_secret", 00:04:17.860 "iscsi_auth_group_add_secret", 00:04:17.860 "iscsi_delete_auth_group", 00:04:17.860 "iscsi_create_auth_group", 00:04:17.860 "iscsi_set_discovery_auth", 00:04:17.860 "iscsi_get_options", 00:04:17.860 "iscsi_target_node_request_logout", 00:04:17.860 "iscsi_target_node_set_redirect", 00:04:17.860 "iscsi_target_node_set_auth", 00:04:17.860 "iscsi_target_node_add_lun", 00:04:17.860 "iscsi_get_stats", 00:04:17.860 "iscsi_get_connections", 00:04:17.860 "iscsi_portal_group_set_auth", 00:04:17.860 "iscsi_start_portal_group", 00:04:17.860 "iscsi_delete_portal_group", 00:04:17.860 "iscsi_create_portal_group", 00:04:17.860 "iscsi_get_portal_groups", 00:04:17.860 "iscsi_delete_target_node", 00:04:17.860 "iscsi_target_node_remove_pg_ig_maps", 00:04:17.860 "iscsi_target_node_add_pg_ig_maps", 00:04:17.860 "iscsi_create_target_node", 00:04:17.860 "iscsi_get_target_nodes", 00:04:17.860 "iscsi_delete_initiator_group", 00:04:17.860 "iscsi_initiator_group_remove_initiators", 00:04:17.860 "iscsi_initiator_group_add_initiators", 00:04:17.860 "iscsi_create_initiator_group", 00:04:17.860 "iscsi_get_initiator_groups", 00:04:17.860 "nvmf_set_crdt", 00:04:17.860 "nvmf_set_config", 00:04:17.860 "nvmf_set_max_subsystems", 00:04:17.860 "nvmf_subsystem_get_listeners", 00:04:17.860 "nvmf_subsystem_get_qpairs", 00:04:17.860 "nvmf_subsystem_get_controllers", 00:04:17.860 "nvmf_get_stats", 00:04:17.860 "nvmf_get_transports", 00:04:17.860 "nvmf_create_transport", 00:04:17.860 "nvmf_get_targets", 00:04:17.860 "nvmf_delete_target", 00:04:17.860 "nvmf_create_target", 00:04:17.860 
"nvmf_subsystem_allow_any_host", 00:04:17.860 "nvmf_subsystem_remove_host", 00:04:17.860 "nvmf_subsystem_add_host", 00:04:17.860 "nvmf_ns_remove_host", 00:04:17.860 "nvmf_ns_add_host", 00:04:17.860 "nvmf_subsystem_remove_ns", 00:04:17.860 "nvmf_subsystem_add_ns", 00:04:17.861 "nvmf_subsystem_listener_set_ana_state", 00:04:17.861 "nvmf_discovery_get_referrals", 00:04:17.861 "nvmf_discovery_remove_referral", 00:04:17.861 "nvmf_discovery_add_referral", 00:04:17.861 "nvmf_subsystem_remove_listener", 00:04:17.861 "nvmf_subsystem_add_listener", 00:04:17.861 "nvmf_delete_subsystem", 00:04:17.861 "nvmf_create_subsystem", 00:04:17.861 "nvmf_get_subsystems", 00:04:17.861 "env_dpdk_get_mem_stats", 00:04:17.861 "nbd_get_disks", 00:04:17.861 "nbd_stop_disk", 00:04:17.861 "nbd_start_disk", 00:04:17.861 "ublk_recover_disk", 00:04:17.861 "ublk_get_disks", 00:04:17.861 "ublk_stop_disk", 00:04:17.861 "ublk_start_disk", 00:04:17.861 "ublk_destroy_target", 00:04:17.861 "ublk_create_target", 00:04:17.861 "virtio_blk_create_transport", 00:04:17.861 "virtio_blk_get_transports", 00:04:17.861 "vhost_controller_set_coalescing", 00:04:17.861 "vhost_get_controllers", 00:04:17.861 "vhost_delete_controller", 00:04:17.861 "vhost_create_blk_controller", 00:04:17.861 "vhost_scsi_controller_remove_target", 00:04:17.861 "vhost_scsi_controller_add_target", 00:04:17.861 "vhost_start_scsi_controller", 00:04:17.861 "vhost_create_scsi_controller", 00:04:17.861 "thread_set_cpumask", 00:04:17.861 "framework_get_scheduler", 00:04:17.861 "framework_set_scheduler", 00:04:17.861 "framework_get_reactors", 00:04:17.861 "thread_get_io_channels", 00:04:17.861 "thread_get_pollers", 00:04:17.861 "thread_get_stats", 00:04:17.861 "framework_monitor_context_switch", 00:04:17.861 "spdk_kill_instance", 00:04:17.861 "log_enable_timestamps", 00:04:17.861 "log_get_flags", 00:04:17.861 "log_clear_flag", 00:04:17.861 "log_set_flag", 00:04:17.861 "log_get_level", 00:04:17.861 "log_set_level", 00:04:17.861 "log_get_print_level", 00:04:17.861 "log_set_print_level", 00:04:17.861 "framework_enable_cpumask_locks", 00:04:17.861 "framework_disable_cpumask_locks", 00:04:17.861 "framework_wait_init", 00:04:17.861 "framework_start_init", 00:04:17.861 "scsi_get_devices", 00:04:17.861 "bdev_get_histogram", 00:04:17.861 "bdev_enable_histogram", 00:04:17.861 "bdev_set_qos_limit", 00:04:17.861 "bdev_set_qd_sampling_period", 00:04:17.861 "bdev_get_bdevs", 00:04:17.861 "bdev_reset_iostat", 00:04:17.861 "bdev_get_iostat", 00:04:17.861 "bdev_examine", 00:04:17.861 "bdev_wait_for_examine", 00:04:17.861 "bdev_set_options", 00:04:17.861 "notify_get_notifications", 00:04:17.861 "notify_get_types", 00:04:17.861 "accel_get_stats", 00:04:17.861 "accel_set_options", 00:04:17.861 "accel_set_driver", 00:04:17.861 "accel_crypto_key_destroy", 00:04:17.861 "accel_crypto_keys_get", 00:04:17.861 "accel_crypto_key_create", 00:04:17.861 "accel_assign_opc", 00:04:17.861 "accel_get_module_info", 00:04:17.861 "accel_get_opc_assignments", 00:04:17.861 "vmd_rescan", 00:04:17.861 "vmd_remove_device", 00:04:17.861 "vmd_enable", 00:04:17.861 "sock_get_default_impl", 00:04:17.861 "sock_set_default_impl", 00:04:17.861 "sock_impl_set_options", 00:04:17.861 "sock_impl_get_options", 00:04:17.861 "iobuf_get_stats", 00:04:17.861 "iobuf_set_options", 00:04:17.861 "keyring_get_keys", 00:04:17.861 "framework_get_pci_devices", 00:04:17.861 "framework_get_config", 00:04:17.861 "framework_get_subsystems", 00:04:17.861 "vfu_tgt_set_base_path", 00:04:17.861 "trace_get_info", 00:04:17.861 
"trace_get_tpoint_group_mask", 00:04:17.861 "trace_disable_tpoint_group", 00:04:17.861 "trace_enable_tpoint_group", 00:04:17.861 "trace_clear_tpoint_mask", 00:04:17.861 "trace_set_tpoint_mask", 00:04:17.861 "spdk_get_version", 00:04:17.861 "rpc_get_methods" 00:04:17.861 ] 00:04:17.861 00:51:30 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:17.861 00:51:30 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:17.861 00:51:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:17.861 00:51:30 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:17.861 00:51:30 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1128912 00:04:17.861 00:51:30 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 1128912 ']' 00:04:17.861 00:51:30 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 1128912 00:04:17.861 00:51:30 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:04:17.861 00:51:30 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:17.861 00:51:30 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1128912 00:04:18.119 00:51:30 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:18.119 00:51:30 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:18.119 00:51:30 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1128912' 00:04:18.119 killing process with pid 1128912 00:04:18.119 00:51:30 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 1128912 00:04:18.119 00:51:30 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 1128912 00:04:18.377 00:04:18.377 real 0m1.795s 00:04:18.377 user 0m3.398s 00:04:18.377 sys 0m0.506s 00:04:18.377 00:51:30 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:18.377 00:51:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:18.377 ************************************ 00:04:18.377 END TEST spdkcli_tcp 00:04:18.377 ************************************ 00:04:18.377 00:51:30 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:18.377 00:51:30 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:18.377 00:51:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:18.377 00:51:30 -- common/autotest_common.sh@10 -- # set +x 00:04:18.635 ************************************ 00:04:18.635 START TEST dpdk_mem_utility 00:04:18.635 ************************************ 00:04:18.635 00:51:30 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:18.635 * Looking for test storage... 
00:04:18.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:18.635 00:51:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:18.635 00:51:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1129177 00:04:18.635 00:51:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1129177 00:04:18.635 00:51:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:18.635 00:51:30 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 1129177 ']' 00:04:18.635 00:51:30 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:18.635 00:51:30 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:18.635 00:51:30 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:18.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:18.635 00:51:30 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:18.635 00:51:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:18.635 [2024-05-15 00:51:30.891607] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:04:18.635 [2024-05-15 00:51:30.891713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1129177 ] 00:04:18.635 EAL: No free 2048 kB hugepages reported on node 1 00:04:18.635 [2024-05-15 00:51:30.963885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.893 [2024-05-15 00:51:31.070391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.458 00:51:31 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:19.458 00:51:31 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:04:19.458 00:51:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:19.458 00:51:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:19.458 00:51:31 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:19.458 00:51:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:19.458 { 00:04:19.458 "filename": "/tmp/spdk_mem_dump.txt" 00:04:19.458 } 00:04:19.458 00:51:31 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:19.458 00:51:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:19.716 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:19.716 1 heaps totaling size 814.000000 MiB 00:04:19.716 size: 814.000000 MiB heap id: 0 00:04:19.716 end heaps---------- 00:04:19.716 8 mempools totaling size 598.116089 MiB 00:04:19.716 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:19.716 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:19.716 size: 84.521057 MiB name: bdev_io_1129177 00:04:19.716 size: 51.011292 MiB name: evtpool_1129177 00:04:19.716 size: 50.003479 MiB name: 
msgpool_1129177 00:04:19.716 size: 21.763794 MiB name: PDU_Pool 00:04:19.716 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:19.716 size: 0.026123 MiB name: Session_Pool 00:04:19.716 end mempools------- 00:04:19.716 6 memzones totaling size 4.142822 MiB 00:04:19.716 size: 1.000366 MiB name: RG_ring_0_1129177 00:04:19.716 size: 1.000366 MiB name: RG_ring_1_1129177 00:04:19.716 size: 1.000366 MiB name: RG_ring_4_1129177 00:04:19.716 size: 1.000366 MiB name: RG_ring_5_1129177 00:04:19.716 size: 0.125366 MiB name: RG_ring_2_1129177 00:04:19.716 size: 0.015991 MiB name: RG_ring_3_1129177 00:04:19.716 end memzones------- 00:04:19.716 00:51:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:19.716 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:19.716 list of free elements. size: 12.519348 MiB 00:04:19.716 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:19.716 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:19.716 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:19.716 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:19.716 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:19.716 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:19.716 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:19.716 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:19.716 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:19.716 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:19.716 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:19.716 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:19.716 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:19.716 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:19.716 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:19.716 list of standard malloc elements. 
size: 199.218079 MiB 00:04:19.716 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:19.716 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:19.716 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:19.716 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:19.716 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:19.716 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:19.716 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:19.716 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:19.716 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:19.716 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:19.716 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:19.716 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:19.716 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:19.716 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:19.716 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:19.716 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:19.716 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:19.716 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:19.716 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:19.716 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:19.716 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:19.716 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:19.716 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:19.716 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:19.716 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:19.716 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:19.716 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:19.716 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:19.716 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:19.716 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:19.716 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:19.716 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:19.716 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:19.716 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:19.716 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:19.717 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:19.717 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:19.717 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:19.717 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:19.717 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:19.717 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:19.717 list of memzone associated elements. 
size: 602.262573 MiB 00:04:19.717 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:19.717 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:19.717 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:19.717 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:19.717 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:19.717 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1129177_0 00:04:19.717 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:19.717 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1129177_0 00:04:19.717 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:19.717 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1129177_0 00:04:19.717 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:19.717 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:19.717 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:19.717 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:19.717 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:19.717 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1129177 00:04:19.717 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:19.717 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1129177 00:04:19.717 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:19.717 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1129177 00:04:19.717 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:19.717 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:19.717 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:19.717 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:19.717 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:19.717 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:19.717 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:19.717 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:19.717 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:19.717 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1129177 00:04:19.717 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:19.717 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1129177 00:04:19.717 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:19.717 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1129177 00:04:19.717 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:19.717 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1129177 00:04:19.717 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:19.717 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1129177 00:04:19.717 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:19.717 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:19.717 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:19.717 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:19.717 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:19.717 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:19.717 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:19.717 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1129177 00:04:19.717 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:19.717 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:19.717 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:19.717 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:19.717 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:19.717 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1129177 00:04:19.717 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:19.717 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:19.717 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:19.717 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1129177 00:04:19.717 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:19.717 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1129177 00:04:19.717 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:19.717 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:19.717 00:51:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:19.717 00:51:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1129177 00:04:19.717 00:51:31 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 1129177 ']' 00:04:19.717 00:51:31 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 1129177 00:04:19.717 00:51:31 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:04:19.717 00:51:31 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:19.717 00:51:31 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1129177 00:04:19.717 00:51:31 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:19.717 00:51:31 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:19.717 00:51:31 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1129177' 00:04:19.717 killing process with pid 1129177 00:04:19.717 00:51:31 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 1129177 00:04:19.717 00:51:31 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 1129177 00:04:20.283 00:04:20.283 real 0m1.641s 00:04:20.283 user 0m1.775s 00:04:20.283 sys 0m0.457s 00:04:20.283 00:51:32 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:20.283 00:51:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:20.283 ************************************ 00:04:20.283 END TEST dpdk_mem_utility 00:04:20.283 ************************************ 00:04:20.283 00:51:32 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:20.283 00:51:32 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:20.283 00:51:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:20.283 00:51:32 -- common/autotest_common.sh@10 -- # set +x 00:04:20.283 ************************************ 00:04:20.283 START TEST event 00:04:20.283 ************************************ 00:04:20.283 00:51:32 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:20.283 * Looking for test storage... 
00:04:20.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:20.283 00:51:32 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:20.283 00:51:32 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:20.283 00:51:32 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:20.283 00:51:32 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:04:20.283 00:51:32 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:20.283 00:51:32 event -- common/autotest_common.sh@10 -- # set +x 00:04:20.283 ************************************ 00:04:20.283 START TEST event_perf 00:04:20.283 ************************************ 00:04:20.283 00:51:32 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:20.283 Running I/O for 1 seconds...[2024-05-15 00:51:32.588762] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:04:20.283 [2024-05-15 00:51:32.588829] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1129441 ] 00:04:20.283 EAL: No free 2048 kB hugepages reported on node 1 00:04:20.283 [2024-05-15 00:51:32.660565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:20.542 [2024-05-15 00:51:32.779331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:20.542 [2024-05-15 00:51:32.779386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:20.542 [2024-05-15 00:51:32.779502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:20.542 [2024-05-15 00:51:32.779505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.913 Running I/O for 1 seconds... 00:04:21.913 lcore 0: 235253 00:04:21.913 lcore 1: 235251 00:04:21.913 lcore 2: 235251 00:04:21.913 lcore 3: 235252 00:04:21.913 done. 00:04:21.913 00:04:21.913 real 0m1.330s 00:04:21.913 user 0m4.236s 00:04:21.913 sys 0m0.089s 00:04:21.913 00:51:33 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:21.913 00:51:33 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:21.913 ************************************ 00:04:21.913 END TEST event_perf 00:04:21.913 ************************************ 00:04:21.914 00:51:33 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:21.914 00:51:33 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:04:21.914 00:51:33 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:21.914 00:51:33 event -- common/autotest_common.sh@10 -- # set +x 00:04:21.914 ************************************ 00:04:21.914 START TEST event_reactor 00:04:21.914 ************************************ 00:04:21.914 00:51:33 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:21.914 [2024-05-15 00:51:33.971847] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:04:21.914 [2024-05-15 00:51:33.971912] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1129602 ] 00:04:21.914 EAL: No free 2048 kB hugepages reported on node 1 00:04:21.914 [2024-05-15 00:51:34.049564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.914 [2024-05-15 00:51:34.168822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.286 test_start 00:04:23.286 oneshot 00:04:23.286 tick 100 00:04:23.286 tick 100 00:04:23.286 tick 250 00:04:23.286 tick 100 00:04:23.286 tick 100 00:04:23.286 tick 100 00:04:23.286 tick 250 00:04:23.286 tick 500 00:04:23.286 tick 100 00:04:23.286 tick 100 00:04:23.286 tick 250 00:04:23.286 tick 100 00:04:23.286 tick 100 00:04:23.286 test_end 00:04:23.286 00:04:23.286 real 0m1.330s 00:04:23.286 user 0m1.228s 00:04:23.286 sys 0m0.097s 00:04:23.286 00:51:35 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:23.286 00:51:35 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:23.286 ************************************ 00:04:23.286 END TEST event_reactor 00:04:23.286 ************************************ 00:04:23.286 00:51:35 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:23.286 00:51:35 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:04:23.286 00:51:35 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:23.286 00:51:35 event -- common/autotest_common.sh@10 -- # set +x 00:04:23.286 ************************************ 00:04:23.286 START TEST event_reactor_perf 00:04:23.286 ************************************ 00:04:23.286 00:51:35 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:23.286 [2024-05-15 00:51:35.352840] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:04:23.286 [2024-05-15 00:51:35.352913] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1129858 ] 00:04:23.286 EAL: No free 2048 kB hugepages reported on node 1 00:04:23.286 [2024-05-15 00:51:35.425831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.286 [2024-05-15 00:51:35.543919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.658 test_start 00:04:24.658 test_end 00:04:24.658 Performance: 357890 events per second 00:04:24.658 00:04:24.658 real 0m1.324s 00:04:24.658 user 0m1.232s 00:04:24.658 sys 0m0.088s 00:04:24.658 00:51:36 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:24.658 00:51:36 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:24.658 ************************************ 00:04:24.658 END TEST event_reactor_perf 00:04:24.658 ************************************ 00:04:24.658 00:51:36 event -- event/event.sh@49 -- # uname -s 00:04:24.658 00:51:36 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:24.658 00:51:36 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:24.658 00:51:36 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:24.658 00:51:36 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:24.658 00:51:36 event -- common/autotest_common.sh@10 -- # set +x 00:04:24.658 ************************************ 00:04:24.658 START TEST event_scheduler 00:04:24.658 ************************************ 00:04:24.658 00:51:36 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:24.658 * Looking for test storage... 00:04:24.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:24.658 00:51:36 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:24.658 00:51:36 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1130059 00:04:24.658 00:51:36 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:24.658 00:51:36 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:24.658 00:51:36 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1130059 00:04:24.658 00:51:36 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 1130059 ']' 00:04:24.658 00:51:36 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.658 00:51:36 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:24.658 00:51:36 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:24.658 00:51:36 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:24.658 00:51:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:24.658 [2024-05-15 00:51:36.815781] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:04:24.658 [2024-05-15 00:51:36.815854] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1130059 ] 00:04:24.658 EAL: No free 2048 kB hugepages reported on node 1 00:04:24.658 [2024-05-15 00:51:36.883333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:24.658 [2024-05-15 00:51:36.993317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.658 [2024-05-15 00:51:36.993380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:24.658 [2024-05-15 00:51:36.993446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:24.658 [2024-05-15 00:51:36.993449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:25.629 00:51:37 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:25.629 00:51:37 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:04:25.629 00:51:37 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:25.629 00:51:37 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.629 00:51:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:25.629 POWER: Env isn't set yet! 00:04:25.629 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:25.629 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:04:25.629 POWER: Cannot get available frequencies of lcore 0 00:04:25.629 POWER: Attempting to initialise PSTAT power management... 00:04:25.629 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:04:25.629 POWER: Initialized successfully for lcore 0 power management 00:04:25.629 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:04:25.629 POWER: Initialized successfully for lcore 1 power management 00:04:25.629 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:04:25.629 POWER: Initialized successfully for lcore 2 power management 00:04:25.629 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:04:25.629 POWER: Initialized successfully for lcore 3 power management 00:04:25.629 00:51:37 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.629 00:51:37 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:25.629 00:51:37 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.629 00:51:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:25.629 [2024-05-15 00:51:37.920622] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
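(The scheduler_create_thread subtest that follows drives the just-started test app through its RPC plugin; rpc_cmd here is the wrapper the scheduler test environment already provides, talking to /var/tmp/spdk.sock. A condensed sketch of those calls, capturing the returned thread IDs instead of hard-coding them:)

    # Threads pinned to one core with a requested activity level (-a), as in the trace below.
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned   -m 0x1 -a 0
    # An unpinned thread created idle, then raised to 50% activity.
    tid=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
    # A thread created only to be deleted again.
    tid=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$tid"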
00:04:25.629 00:51:37 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.630 00:51:37 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:25.630 00:51:37 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:25.630 00:51:37 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:25.630 00:51:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:25.630 ************************************ 00:04:25.630 START TEST scheduler_create_thread 00:04:25.630 ************************************ 00:04:25.630 00:51:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:04:25.630 00:51:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:25.630 00:51:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.630 00:51:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.630 2 00:04:25.630 00:51:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.630 00:51:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:25.630 00:51:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.630 00:51:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.630 3 00:04:25.630 00:51:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.630 00:51:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:25.630 00:51:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.630 00:51:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.630 4 00:04:25.630 00:51:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.630 00:51:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:25.630 00:51:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.630 00:51:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.630 5 00:04:25.630 00:51:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.630 00:51:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:25.630 00:51:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.630 00:51:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.630 6 00:04:25.630 00:51:38 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.630 00:51:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:25.630 00:51:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.630 00:51:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.630 7 00:04:25.630 00:51:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.630 00:51:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:25.630 00:51:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.630 00:51:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.888 8 00:04:25.888 00:51:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.889 00:51:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:25.889 00:51:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.889 00:51:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.889 9 00:04:25.889 00:51:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.889 00:51:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:25.889 00:51:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.889 00:51:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.889 10 00:04:25.889 00:51:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.889 00:51:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:25.889 00:51:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.889 00:51:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.889 00:51:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.889 00:51:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:25.889 00:51:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:25.889 00:51:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.889 00:51:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.889 00:51:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.889 00:51:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:25.889 00:51:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.889 00:51:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:26.454 00:51:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:26.454 00:51:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:26.454 00:51:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:26.454 00:51:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:26.454 00:51:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:27.386 00:51:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:27.386 00:04:27.386 real 0m1.753s 00:04:27.386 user 0m0.010s 00:04:27.386 sys 0m0.004s 00:04:27.386 00:51:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:27.386 00:51:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:27.386 ************************************ 00:04:27.386 END TEST scheduler_create_thread 00:04:27.386 ************************************ 00:04:27.386 00:51:39 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:27.386 00:51:39 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1130059 00:04:27.386 00:51:39 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 1130059 ']' 00:04:27.386 00:51:39 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 1130059 00:04:27.386 00:51:39 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:04:27.386 00:51:39 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:27.386 00:51:39 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1130059 00:04:27.386 00:51:39 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:04:27.386 00:51:39 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:04:27.386 00:51:39 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1130059' 00:04:27.386 killing process with pid 1130059 00:04:27.386 00:51:39 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 1130059 00:04:27.386 00:51:39 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 1130059 00:04:27.953 [2024-05-15 00:51:40.188455] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
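(The POWER messages that follow show the shutdown path handing each core's cpufreq governor back to its original setting after the test forced 'performance' at startup. A quick way to eyeball the governors on the pinned cores by hand; the sysfs layout is the standard cpufreq one echoed earlier in this log:)

    for c in 0 1 2 3; do
      printf 'cpu%s: ' "$c"
      cat "/sys/devices/system/cpu/cpu$c/cpufreq/scaling_governor"
    done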
00:04:27.953 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:04:27.953 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:04:27.953 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:04:27.953 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:04:27.953 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:04:27.953 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:04:27.953 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:04:27.953 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:04:28.210 00:04:28.210 real 0m3.737s 00:04:28.210 user 0m7.018s 00:04:28.210 sys 0m0.385s 00:04:28.210 00:51:40 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:28.210 00:51:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:28.210 ************************************ 00:04:28.210 END TEST event_scheduler 00:04:28.210 ************************************ 00:04:28.210 00:51:40 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:28.210 00:51:40 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:28.210 00:51:40 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:28.210 00:51:40 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:28.210 00:51:40 event -- common/autotest_common.sh@10 -- # set +x 00:04:28.210 ************************************ 00:04:28.210 START TEST app_repeat 00:04:28.210 ************************************ 00:04:28.210 00:51:40 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:04:28.210 00:51:40 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:28.210 00:51:40 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:28.210 00:51:40 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:28.210 00:51:40 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:28.210 00:51:40 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:28.210 00:51:40 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:28.210 00:51:40 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:28.210 00:51:40 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1130516 00:04:28.210 00:51:40 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:28.210 00:51:40 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:28.211 00:51:40 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1130516' 00:04:28.211 Process app_repeat pid: 1130516 00:04:28.211 00:51:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:28.211 00:51:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:28.211 spdk_app_start Round 0 00:04:28.211 00:51:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1130516 /var/tmp/spdk-nbd.sock 00:04:28.211 00:51:40 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 1130516 ']' 00:04:28.211 00:51:40 event.app_repeat -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:28.211 00:51:40 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:28.211 00:51:40 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:28.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:28.211 00:51:40 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:28.211 00:51:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:28.211 [2024-05-15 00:51:40.546743] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:04:28.211 [2024-05-15 00:51:40.546807] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1130516 ] 00:04:28.211 EAL: No free 2048 kB hugepages reported on node 1 00:04:28.468 [2024-05-15 00:51:40.622545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:28.468 [2024-05-15 00:51:40.738077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:28.468 [2024-05-15 00:51:40.738082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.468 00:51:40 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:28.468 00:51:40 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:04:28.468 00:51:40 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:28.725 Malloc0 00:04:28.983 00:51:41 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:28.983 Malloc1 00:04:29.240 00:51:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:29.240 00:51:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.240 00:51:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:29.240 00:51:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:29.240 00:51:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.240 00:51:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:29.240 00:51:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:29.240 00:51:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.240 00:51:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:29.240 00:51:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:29.240 00:51:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.240 00:51:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:29.240 00:51:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:29.240 00:51:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:29.240 00:51:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:29.240 00:51:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:29.240 /dev/nbd0 00:04:29.498 00:51:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:29.498 00:51:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:29.498 00:51:41 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:04:29.498 00:51:41 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:29.498 00:51:41 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:29.498 00:51:41 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:29.498 00:51:41 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:04:29.498 00:51:41 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:29.498 00:51:41 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:29.498 00:51:41 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:29.498 00:51:41 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:29.498 1+0 records in 00:04:29.498 1+0 records out 00:04:29.498 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179115 s, 22.9 MB/s 00:04:29.499 00:51:41 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:29.499 00:51:41 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:29.499 00:51:41 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:29.499 00:51:41 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:29.499 00:51:41 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:29.499 00:51:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:29.499 00:51:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:29.499 00:51:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:29.499 /dev/nbd1 00:04:29.757 00:51:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:29.757 00:51:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:29.757 00:51:41 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:04:29.757 00:51:41 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:29.757 00:51:41 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:29.757 00:51:41 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:29.757 00:51:41 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:04:29.757 00:51:41 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:29.757 00:51:41 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:29.757 00:51:41 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:29.757 00:51:41 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:29.757 1+0 records in 00:04:29.757 1+0 records out 00:04:29.757 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000171095 s, 23.9 MB/s 00:04:29.757 00:51:41 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:29.757 00:51:41 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:29.757 00:51:41 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:29.757 00:51:41 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:29.757 00:51:41 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:29.757 00:51:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:29.757 00:51:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:29.757 00:51:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:29.757 00:51:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.757 00:51:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:30.015 { 00:04:30.015 "nbd_device": "/dev/nbd0", 00:04:30.015 "bdev_name": "Malloc0" 00:04:30.015 }, 00:04:30.015 { 00:04:30.015 "nbd_device": "/dev/nbd1", 00:04:30.015 "bdev_name": "Malloc1" 00:04:30.015 } 00:04:30.015 ]' 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:30.015 { 00:04:30.015 "nbd_device": "/dev/nbd0", 00:04:30.015 "bdev_name": "Malloc0" 00:04:30.015 }, 00:04:30.015 { 00:04:30.015 "nbd_device": "/dev/nbd1", 00:04:30.015 "bdev_name": "Malloc1" 00:04:30.015 } 00:04:30.015 ]' 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:30.015 /dev/nbd1' 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:30.015 /dev/nbd1' 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:30.015 256+0 records in 00:04:30.015 256+0 records out 00:04:30.015 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00379604 s, 276 MB/s 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in 
"${nbd_list[@]}" 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:30.015 256+0 records in 00:04:30.015 256+0 records out 00:04:30.015 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238286 s, 44.0 MB/s 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:30.015 256+0 records in 00:04:30.015 256+0 records out 00:04:30.015 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0228074 s, 46.0 MB/s 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:30.015 00:51:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:30.272 00:51:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:30.272 00:51:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:30.272 00:51:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:30.272 00:51:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:30.272 00:51:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:30.272 00:51:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:30.272 00:51:42 event.app_repeat -- bdev/nbd_common.sh@41 
-- # break 00:04:30.272 00:51:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:30.272 00:51:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:30.272 00:51:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:30.529 00:51:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:30.529 00:51:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:30.529 00:51:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:30.529 00:51:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:30.529 00:51:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:30.529 00:51:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:30.529 00:51:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:30.529 00:51:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:30.530 00:51:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:30.530 00:51:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.530 00:51:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:30.787 00:51:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:30.787 00:51:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:30.787 00:51:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:30.787 00:51:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:30.787 00:51:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:30.787 00:51:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:30.787 00:51:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:30.787 00:51:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:30.787 00:51:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:30.787 00:51:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:30.787 00:51:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:30.787 00:51:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:30.787 00:51:43 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:31.045 00:51:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:31.302 [2024-05-15 00:51:43.609907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:31.559 [2024-05-15 00:51:43.726861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.559 [2024-05-15 00:51:43.726863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:31.559 [2024-05-15 00:51:43.788884] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:31.559 [2024-05-15 00:51:43.788997] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
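Round 0 above exercises nbd_rpc_data_verify from bdev/nbd_common.sh. A condensed sketch of what the trace shows, using the paths and bdev names from this run (the rpc.py subcommands are exactly the ones invoked above; the real helper loops over its argument lists rather than hard-coding two devices):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  tmp=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest

  $rpc -s "$sock" nbd_start_disk Malloc0 /dev/nbd0            # attach each malloc bdev to an nbd device
  $rpc -s "$sock" nbd_start_disk Malloc1 /dev/nbd1

  dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB of random reference data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write the pattern through the block device
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$tmp" "$nbd"                              # read back and byte-compare against the file
  done
  rm "$tmp"

  $rpc -s "$sock" nbd_stop_disk /dev/nbd0
  $rpc -s "$sock" nbd_stop_disk /dev/nbd1
  $rpc -s "$sock" nbd_get_disks                               # expected to report an empty list afterwards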
00:04:34.082 00:51:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:34.082 00:51:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:34.082 spdk_app_start Round 1 00:04:34.082 00:51:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1130516 /var/tmp/spdk-nbd.sock 00:04:34.082 00:51:46 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 1130516 ']' 00:04:34.082 00:51:46 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:34.082 00:51:46 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:34.082 00:51:46 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:34.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:34.082 00:51:46 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:34.082 00:51:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:34.339 00:51:46 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:34.339 00:51:46 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:04:34.339 00:51:46 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:34.597 Malloc0 00:04:34.598 00:51:46 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:34.855 Malloc1 00:04:34.855 00:51:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:34.855 00:51:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.855 00:51:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:34.855 00:51:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:34.855 00:51:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.855 00:51:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:34.855 00:51:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:34.855 00:51:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.855 00:51:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:34.855 00:51:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:34.855 00:51:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.855 00:51:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:34.855 00:51:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:34.855 00:51:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:34.855 00:51:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:34.855 00:51:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:35.113 /dev/nbd0 00:04:35.113 00:51:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:35.113 00:51:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:04:35.113 00:51:47 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:04:35.113 00:51:47 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:35.113 00:51:47 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:35.113 00:51:47 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:35.113 00:51:47 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:04:35.113 00:51:47 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:35.113 00:51:47 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:35.113 00:51:47 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:35.113 00:51:47 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:35.113 1+0 records in 00:04:35.113 1+0 records out 00:04:35.113 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185193 s, 22.1 MB/s 00:04:35.113 00:51:47 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:35.113 00:51:47 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:35.113 00:51:47 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:35.113 00:51:47 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:35.113 00:51:47 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:35.113 00:51:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:35.114 00:51:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:35.114 00:51:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:35.372 /dev/nbd1 00:04:35.372 00:51:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:35.372 00:51:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:35.372 00:51:47 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:04:35.372 00:51:47 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:35.372 00:51:47 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:35.372 00:51:47 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:35.372 00:51:47 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:04:35.372 00:51:47 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:35.372 00:51:47 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:35.372 00:51:47 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:35.372 00:51:47 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:35.372 1+0 records in 00:04:35.372 1+0 records out 00:04:35.372 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188493 s, 21.7 MB/s 00:04:35.372 00:51:47 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:35.372 00:51:47 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:35.372 00:51:47 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:35.372 00:51:47 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:35.372 00:51:47 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:35.372 00:51:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:35.372 00:51:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:35.372 00:51:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:35.372 00:51:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.372 00:51:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:35.630 00:51:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:35.630 { 00:04:35.630 "nbd_device": "/dev/nbd0", 00:04:35.630 "bdev_name": "Malloc0" 00:04:35.630 }, 00:04:35.630 { 00:04:35.630 "nbd_device": "/dev/nbd1", 00:04:35.630 "bdev_name": "Malloc1" 00:04:35.630 } 00:04:35.630 ]' 00:04:35.630 00:51:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:35.630 { 00:04:35.630 "nbd_device": "/dev/nbd0", 00:04:35.630 "bdev_name": "Malloc0" 00:04:35.630 }, 00:04:35.630 { 00:04:35.630 "nbd_device": "/dev/nbd1", 00:04:35.630 "bdev_name": "Malloc1" 00:04:35.630 } 00:04:35.630 ]' 00:04:35.630 00:51:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:35.630 00:51:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:35.630 /dev/nbd1' 00:04:35.630 00:51:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:35.630 /dev/nbd1' 00:04:35.630 00:51:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:35.630 00:51:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:35.630 00:51:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:35.630 00:51:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:35.630 00:51:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:35.630 00:51:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:35.630 00:51:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.630 00:51:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:35.630 00:51:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:35.630 00:51:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:35.630 00:51:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:35.630 00:51:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:35.630 256+0 records in 00:04:35.630 256+0 records out 00:04:35.630 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00500183 s, 210 MB/s 00:04:35.630 00:51:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:35.630 00:51:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:35.630 256+0 records in 00:04:35.630 256+0 records out 00:04:35.631 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0206102 s, 50.9 MB/s 00:04:35.631 00:51:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:35.631 00:51:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:35.631 256+0 records in 00:04:35.631 256+0 records out 00:04:35.631 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0228668 s, 45.9 MB/s 00:04:35.631 00:51:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:35.631 00:51:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.631 00:51:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:35.631 00:51:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:35.631 00:51:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:35.631 00:51:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:35.631 00:51:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:35.631 00:51:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:35.631 00:51:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:35.631 00:51:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:35.631 00:51:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:35.631 00:51:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:35.631 00:51:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:35.631 00:51:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.631 00:51:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.631 00:51:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:35.631 00:51:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:35.631 00:51:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:35.631 00:51:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:35.888 00:51:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:35.888 00:51:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:35.888 00:51:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:35.888 00:51:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:35.888 00:51:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:35.888 00:51:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:35.888 00:51:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:35.888 00:51:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:35.888 00:51:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:35.888 00:51:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:36.146 00:51:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:36.146 00:51:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:36.146 00:51:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:36.146 00:51:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:36.146 00:51:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:36.146 00:51:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:36.146 00:51:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:36.146 00:51:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:36.146 00:51:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:36.146 00:51:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.146 00:51:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:36.403 00:51:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:36.403 00:51:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:36.403 00:51:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:36.661 00:51:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:36.661 00:51:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:36.661 00:51:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:36.661 00:51:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:36.661 00:51:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:36.661 00:51:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:36.661 00:51:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:36.661 00:51:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:36.661 00:51:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:36.661 00:51:48 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:36.918 00:51:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:37.176 [2024-05-15 00:51:49.342827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:37.176 [2024-05-15 00:51:49.459765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.176 [2024-05-15 00:51:49.459769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.176 [2024-05-15 00:51:49.522550] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:37.176 [2024-05-15 00:51:49.522631] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
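For context, the driver loop in test/event/event.sh that produces these rounds looks roughly like the following, as reconstructed from the trace. The app_repeat binary itself was started once before the loop with -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4; the variable names here are illustrative:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  repeat_pid=1130516                                   # pid of the app_repeat binary in this run

  for i in {0..2}; do
      echo "spdk_app_start Round $i"
      waitforlisten "$repeat_pid" "$sock"              # wait until the app accepts RPCs again
      $rpc -s "$sock" bdev_malloc_create 64 4096       # Malloc0
      $rpc -s "$sock" bdev_malloc_create 64 4096       # Malloc1
      nbd_rpc_data_verify "$sock" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
      $rpc -s "$sock" spdk_kill_instance SIGTERM       # SIGTERM makes app_repeat restart for the next round
      sleep 3
  done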
00:04:39.702 00:51:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:39.702 00:51:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:39.702 spdk_app_start Round 2 00:04:39.702 00:51:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1130516 /var/tmp/spdk-nbd.sock 00:04:39.702 00:51:52 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 1130516 ']' 00:04:39.702 00:51:52 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:39.702 00:51:52 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:39.702 00:51:52 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:39.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:39.702 00:51:52 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:39.702 00:51:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:39.991 00:51:52 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:39.991 00:51:52 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:04:39.991 00:51:52 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:40.249 Malloc0 00:04:40.249 00:51:52 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:40.506 Malloc1 00:04:40.506 00:51:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:40.506 00:51:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.506 00:51:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:40.506 00:51:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:40.506 00:51:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.506 00:51:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:40.506 00:51:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:40.506 00:51:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.506 00:51:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:40.506 00:51:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:40.506 00:51:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.506 00:51:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:40.506 00:51:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:40.506 00:51:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:40.506 00:51:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.506 00:51:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:40.764 /dev/nbd0 00:04:40.764 00:51:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:40.764 00:51:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:04:40.764 00:51:53 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:04:40.764 00:51:53 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:40.764 00:51:53 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:40.764 00:51:53 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:40.764 00:51:53 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:04:40.764 00:51:53 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:40.764 00:51:53 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:40.764 00:51:53 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:40.764 00:51:53 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:40.764 1+0 records in 00:04:40.764 1+0 records out 00:04:40.764 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187917 s, 21.8 MB/s 00:04:40.764 00:51:53 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.764 00:51:53 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:40.764 00:51:53 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.764 00:51:53 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:40.764 00:51:53 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:40.764 00:51:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:40.764 00:51:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.764 00:51:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:41.021 /dev/nbd1 00:04:41.021 00:51:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:41.021 00:51:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:41.021 00:51:53 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:04:41.021 00:51:53 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:41.021 00:51:53 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:41.021 00:51:53 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:41.021 00:51:53 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:04:41.021 00:51:53 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:41.021 00:51:53 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:41.021 00:51:53 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:41.022 00:51:53 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:41.022 1+0 records in 00:04:41.022 1+0 records out 00:04:41.022 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000161056 s, 25.4 MB/s 00:04:41.022 00:51:53 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:41.022 00:51:53 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:41.022 00:51:53 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:41.022 00:51:53 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:41.022 00:51:53 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:41.022 00:51:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:41.022 00:51:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:41.022 00:51:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:41.022 00:51:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.022 00:51:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:41.281 00:51:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:41.281 { 00:04:41.281 "nbd_device": "/dev/nbd0", 00:04:41.281 "bdev_name": "Malloc0" 00:04:41.281 }, 00:04:41.281 { 00:04:41.281 "nbd_device": "/dev/nbd1", 00:04:41.281 "bdev_name": "Malloc1" 00:04:41.281 } 00:04:41.281 ]' 00:04:41.281 00:51:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:41.281 { 00:04:41.281 "nbd_device": "/dev/nbd0", 00:04:41.281 "bdev_name": "Malloc0" 00:04:41.281 }, 00:04:41.281 { 00:04:41.281 "nbd_device": "/dev/nbd1", 00:04:41.281 "bdev_name": "Malloc1" 00:04:41.281 } 00:04:41.281 ]' 00:04:41.281 00:51:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:41.281 00:51:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:41.281 /dev/nbd1' 00:04:41.281 00:51:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:41.281 /dev/nbd1' 00:04:41.281 00:51:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:41.281 00:51:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:41.281 00:51:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:41.281 00:51:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:41.281 00:51:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:41.281 00:51:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:41.281 00:51:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.281 00:51:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:41.282 00:51:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:41.282 00:51:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:41.282 00:51:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:41.282 00:51:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:41.282 256+0 records in 00:04:41.282 256+0 records out 00:04:41.282 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00377925 s, 277 MB/s 00:04:41.282 00:51:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:41.282 00:51:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:41.539 256+0 records in 00:04:41.539 256+0 records out 00:04:41.539 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0241643 s, 43.4 MB/s 00:04:41.539 00:51:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:41.539 00:51:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:41.539 256+0 records in 00:04:41.539 256+0 records out 00:04:41.539 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022681 s, 46.2 MB/s 00:04:41.539 00:51:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:41.539 00:51:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.539 00:51:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:41.539 00:51:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:41.539 00:51:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:41.539 00:51:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:41.539 00:51:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:41.539 00:51:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:41.539 00:51:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:41.539 00:51:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:41.539 00:51:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:41.539 00:51:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:41.539 00:51:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:41.539 00:51:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.539 00:51:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.539 00:51:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:41.539 00:51:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:41.539 00:51:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:41.539 00:51:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:41.797 00:51:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:41.797 00:51:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:41.797 00:51:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:41.797 00:51:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:41.797 00:51:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:41.797 00:51:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:41.797 00:51:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:41.797 00:51:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:41.797 00:51:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:41.797 00:51:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:42.055 00:51:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:42.055 00:51:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:42.055 00:51:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:42.055 00:51:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:42.055 00:51:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:42.055 00:51:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:42.055 00:51:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:42.055 00:51:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:42.055 00:51:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:42.055 00:51:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.055 00:51:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:42.312 00:51:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:42.312 00:51:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:42.312 00:51:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:42.312 00:51:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:42.312 00:51:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:42.312 00:51:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:42.312 00:51:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:42.312 00:51:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:42.312 00:51:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:42.312 00:51:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:42.312 00:51:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:42.312 00:51:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:42.312 00:51:54 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:42.570 00:51:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:42.827 [2024-05-15 00:51:55.082306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:42.827 [2024-05-15 00:51:55.197757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.827 [2024-05-15 00:51:55.197758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.085 [2024-05-15 00:51:55.260988] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:43.085 [2024-05-15 00:51:55.261064] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
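The empty nbd_get_disks output above is what the nbd_get_count helper checks once the disks are stopped. A sketch of that check, with the jq/grep pipeline copied from the trace (the failure action at the end is illustrative, not the helper's exact behaviour):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  disks_json=$($rpc -s /var/tmp/spdk-nbd.sock nbd_get_disks)
  names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
  count=$(echo "$names" | grep -c /dev/nbd || true)    # grep -c prints 0 (and exits non-zero) when nothing matches
  if [ "$count" -ne 0 ]; then
      echo "nbd devices still attached after stop: $names"
      exit 1
  fi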
00:04:45.611 00:51:57 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1130516 /var/tmp/spdk-nbd.sock 00:04:45.611 00:51:57 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 1130516 ']' 00:04:45.611 00:51:57 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:45.611 00:51:57 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:45.611 00:51:57 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:45.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:45.611 00:51:57 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:45.611 00:51:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:45.870 00:51:58 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:45.870 00:51:58 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:04:45.870 00:51:58 event.app_repeat -- event/event.sh@39 -- # killprocess 1130516 00:04:45.870 00:51:58 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 1130516 ']' 00:04:45.870 00:51:58 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 1130516 00:04:45.870 00:51:58 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:04:45.870 00:51:58 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:45.870 00:51:58 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1130516 00:04:45.870 00:51:58 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:45.870 00:51:58 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:45.870 00:51:58 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1130516' 00:04:45.870 killing process with pid 1130516 00:04:45.870 00:51:58 event.app_repeat -- common/autotest_common.sh@965 -- # kill 1130516 00:04:45.870 00:51:58 event.app_repeat -- common/autotest_common.sh@970 -- # wait 1130516 00:04:46.128 spdk_app_start is called in Round 0. 00:04:46.128 Shutdown signal received, stop current app iteration 00:04:46.128 Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 reinitialization... 00:04:46.128 spdk_app_start is called in Round 1. 00:04:46.128 Shutdown signal received, stop current app iteration 00:04:46.128 Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 reinitialization... 00:04:46.128 spdk_app_start is called in Round 2. 00:04:46.128 Shutdown signal received, stop current app iteration 00:04:46.128 Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 reinitialization... 00:04:46.128 spdk_app_start is called in Round 3. 
00:04:46.128 Shutdown signal received, stop current app iteration 00:04:46.128 00:51:58 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:46.128 00:51:58 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:46.128 00:04:46.128 real 0m17.806s 00:04:46.128 user 0m38.740s 00:04:46.128 sys 0m3.403s 00:04:46.128 00:51:58 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:46.128 00:51:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:46.128 ************************************ 00:04:46.128 END TEST app_repeat 00:04:46.128 ************************************ 00:04:46.128 00:51:58 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:46.128 00:51:58 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:46.128 00:51:58 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:46.128 00:51:58 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:46.128 00:51:58 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.128 ************************************ 00:04:46.128 START TEST cpu_locks 00:04:46.128 ************************************ 00:04:46.128 00:51:58 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:46.128 * Looking for test storage... 00:04:46.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:46.128 00:51:58 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:46.128 00:51:58 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:46.128 00:51:58 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:46.128 00:51:58 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:46.128 00:51:58 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:46.128 00:51:58 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:46.128 00:51:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:46.128 ************************************ 00:04:46.128 START TEST default_locks 00:04:46.128 ************************************ 00:04:46.128 00:51:58 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:04:46.128 00:51:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1132869 00:04:46.128 00:51:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.128 00:51:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1132869 00:04:46.128 00:51:58 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 1132869 ']' 00:04:46.128 00:51:58 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.128 00:51:58 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:46.128 00:51:58 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
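The default_locks run that begins here starts a single-core spdk_tgt, checks via lslocks that it holds its per-core lock file, then kills it. A minimal sketch under those assumptions; launching with '&' and capturing $! is illustrative, and spdk_cpu_lock is simply the pattern the trace greps for:

  spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  $spdk_tgt -m 0x1 &                                   # target pinned to a single core
  tgt_pid=$!
  waitforlisten "$tgt_pid"                             # block until /var/tmp/spdk.sock accepts RPCs
  lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock        # the running target must hold its CPU lock file
  killprocess "$tgt_pid"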
00:04:46.128 00:51:58 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:46.128 00:51:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:46.128 [2024-05-15 00:51:58.503949] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:04:46.128 [2024-05-15 00:51:58.504056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1132869 ] 00:04:46.387 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.387 [2024-05-15 00:51:58.571350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.387 [2024-05-15 00:51:58.679820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.321 00:51:59 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:47.321 00:51:59 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:04:47.321 00:51:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1132869 00:04:47.321 00:51:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1132869 00:04:47.321 00:51:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:47.321 lslocks: write error 00:04:47.321 00:51:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1132869 00:04:47.321 00:51:59 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 1132869 ']' 00:04:47.321 00:51:59 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 1132869 00:04:47.321 00:51:59 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:04:47.321 00:51:59 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:47.321 00:51:59 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1132869 00:04:47.579 00:51:59 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:47.579 00:51:59 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:47.579 00:51:59 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1132869' 00:04:47.579 killing process with pid 1132869 00:04:47.579 00:51:59 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 1132869 00:04:47.579 00:51:59 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 1132869 00:04:47.837 00:52:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1132869 00:04:47.837 00:52:00 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:04:47.837 00:52:00 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1132869 00:04:47.837 00:52:00 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:47.837 00:52:00 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:47.837 00:52:00 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:47.837 00:52:00 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:47.837 00:52:00 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- 
# waitforlisten 1132869 00:04:47.837 00:52:00 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 1132869 ']' 00:04:47.837 00:52:00 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.837 00:52:00 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:47.837 00:52:00 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.837 00:52:00 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:47.837 00:52:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:47.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (1132869) - No such process 00:04:47.837 ERROR: process (pid: 1132869) is no longer running 00:04:47.837 00:52:00 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:47.837 00:52:00 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:04:47.837 00:52:00 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:04:47.837 00:52:00 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:47.837 00:52:00 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:47.837 00:52:00 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:47.837 00:52:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:47.837 00:52:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:47.837 00:52:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:47.837 00:52:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:47.837 00:04:47.837 real 0m1.720s 00:04:47.837 user 0m1.887s 00:04:47.837 sys 0m0.533s 00:04:47.837 00:52:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:47.837 00:52:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:47.837 ************************************ 00:04:47.837 END TEST default_locks 00:04:47.837 ************************************ 00:04:47.837 00:52:00 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:47.837 00:52:00 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:47.837 00:52:00 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:47.837 00:52:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:48.096 ************************************ 00:04:48.096 START TEST default_locks_via_rpc 00:04:48.096 ************************************ 00:04:48.096 00:52:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:04:48.096 00:52:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1133162 00:04:48.096 00:52:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:48.096 00:52:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1133162 00:04:48.096 00:52:00 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1133162 ']' 00:04:48.096 00:52:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.096 00:52:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:48.096 00:52:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.096 00:52:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:48.096 00:52:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.096 [2024-05-15 00:52:00.282131] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:04:48.096 [2024-05-15 00:52:00.282229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1133162 ] 00:04:48.096 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.096 [2024-05-15 00:52:00.369638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.354 [2024-05-15 00:52:00.512393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.612 00:52:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:48.612 00:52:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:04:48.612 00:52:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:48.612 00:52:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.612 00:52:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.612 00:52:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.612 00:52:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:48.612 00:52:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:48.612 00:52:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:48.612 00:52:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:48.612 00:52:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:48.612 00:52:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.612 00:52:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.612 00:52:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.612 00:52:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1133162 00:04:48.612 00:52:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1133162 00:04:48.612 00:52:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:48.870 00:52:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1133162 00:04:48.870 00:52:01 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 1133162 ']' 00:04:48.870 00:52:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 1133162 00:04:48.870 00:52:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:04:48.870 00:52:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:48.870 00:52:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1133162 00:04:48.870 00:52:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:48.870 00:52:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:48.870 00:52:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1133162' 00:04:48.870 killing process with pid 1133162 00:04:48.870 00:52:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 1133162 00:04:48.870 00:52:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 1133162 00:04:49.128 00:04:49.128 real 0m1.262s 00:04:49.128 user 0m1.268s 00:04:49.128 sys 0m0.545s 00:04:49.128 00:52:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:49.128 00:52:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.128 ************************************ 00:04:49.128 END TEST default_locks_via_rpc 00:04:49.128 ************************************ 00:04:49.128 00:52:01 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:49.128 00:52:01 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:49.128 00:52:01 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:49.128 00:52:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:49.386 ************************************ 00:04:49.386 START TEST non_locking_app_on_locked_coremask 00:04:49.386 ************************************ 00:04:49.386 00:52:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:04:49.386 00:52:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1133322 00:04:49.386 00:52:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:49.386 00:52:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1133322 /var/tmp/spdk.sock 00:04:49.386 00:52:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1133322 ']' 00:04:49.386 00:52:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.386 00:52:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:49.386 00:52:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
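The locks_exist check that recurs throughout these traces (event/cpu_locks.sh@22) is nothing more than lslocks on the target pid filtered for the per-core lock files under /var/tmp/spdk_cpu_lock_*; the stray 'lslocks: write error' lines are most likely the harmless EPIPE lslocks gets once grep -q exits on the first match. The same check by hand, assuming util-linux lslocks is installed:

    locks_exist_sketch() {
        local pid=$1
        # grep -q returns as soon as one spdk_cpu_lock_* entry is seen,
        # which is why lslocks sometimes reports a write error
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    locks_exist_sketch 1132869 && echo "target still holds its core locks"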
00:04:49.386 00:52:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:49.386 00:52:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:49.386 [2024-05-15 00:52:01.599639] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:04:49.386 [2024-05-15 00:52:01.599744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1133322 ] 00:04:49.386 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.386 [2024-05-15 00:52:01.672997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.644 [2024-05-15 00:52:01.786060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.209 00:52:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:50.209 00:52:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:04:50.209 00:52:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1133460 00:04:50.209 00:52:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:50.209 00:52:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1133460 /var/tmp/spdk2.sock 00:04:50.209 00:52:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1133460 ']' 00:04:50.209 00:52:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:50.209 00:52:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:50.209 00:52:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:50.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:50.209 00:52:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:50.209 00:52:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:50.209 [2024-05-15 00:52:02.578345] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:04:50.209 [2024-05-15 00:52:02.578443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1133460 ] 00:04:50.467 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.467 [2024-05-15 00:52:02.689614] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
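In the non_locking_app_on_locked_coremask run above, the first target claimed core 0 with -m 0x1 while the second was started on the same mask but with --disable-cpumask-locks and its own RPC socket, so it comes up instead of aborting; only the first instance ever shows up in the lock check. A condensed sketch of that arrangement, with the full spdk_tgt path shortened:

    spdk_tgt -m 0x1 &                                                  # claims the core 0 lock file
    locked_pid=$!
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # same core, no claim attempted
    unlocked_pid=$!
    lslocks -p "$locked_pid"   | grep -q spdk_cpu_lock   # succeeds
    lslocks -p "$unlocked_pid" | grep -q spdk_cpu_lock   # fails, nothing was claimed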
00:04:50.467 [2024-05-15 00:52:02.689655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.725 [2024-05-15 00:52:02.928275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.289 00:52:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:51.290 00:52:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:04:51.290 00:52:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1133322 00:04:51.290 00:52:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1133322 00:04:51.290 00:52:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:51.855 lslocks: write error 00:04:51.855 00:52:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1133322 00:04:51.855 00:52:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1133322 ']' 00:04:51.855 00:52:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 1133322 00:04:51.855 00:52:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:04:51.855 00:52:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:51.855 00:52:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1133322 00:04:51.855 00:52:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:51.855 00:52:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:51.855 00:52:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1133322' 00:04:51.855 killing process with pid 1133322 00:04:51.855 00:52:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 1133322 00:04:51.855 00:52:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 1133322 00:04:52.788 00:52:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1133460 00:04:52.788 00:52:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1133460 ']' 00:04:52.788 00:52:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 1133460 00:04:52.788 00:52:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:04:52.788 00:52:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:52.788 00:52:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1133460 00:04:52.788 00:52:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:52.788 00:52:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:52.788 00:52:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1133460' 00:04:52.788 
killing process with pid 1133460 00:04:52.788 00:52:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 1133460 00:04:52.788 00:52:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 1133460 00:04:53.046 00:04:53.046 real 0m3.847s 00:04:53.046 user 0m4.182s 00:04:53.046 sys 0m1.084s 00:04:53.046 00:52:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:53.046 00:52:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:53.046 ************************************ 00:04:53.046 END TEST non_locking_app_on_locked_coremask 00:04:53.046 ************************************ 00:04:53.046 00:52:05 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:53.046 00:52:05 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:53.046 00:52:05 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:53.046 00:52:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:53.304 ************************************ 00:04:53.304 START TEST locking_app_on_unlocked_coremask 00:04:53.304 ************************************ 00:04:53.304 00:52:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:04:53.304 00:52:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1133771 00:04:53.304 00:52:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:53.304 00:52:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1133771 /var/tmp/spdk.sock 00:04:53.304 00:52:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1133771 ']' 00:04:53.304 00:52:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.304 00:52:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:53.304 00:52:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.304 00:52:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:53.304 00:52:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:53.304 [2024-05-15 00:52:05.498071] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:04:53.304 [2024-05-15 00:52:05.498157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1133771 ] 00:04:53.304 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.304 [2024-05-15 00:52:05.566460] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:53.304 [2024-05-15 00:52:05.566506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.304 [2024-05-15 00:52:05.675152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.564 00:52:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:53.564 00:52:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:04:53.564 00:52:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1133894 00:04:53.564 00:52:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:53.564 00:52:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1133894 /var/tmp/spdk2.sock 00:04:53.564 00:52:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1133894 ']' 00:04:53.564 00:52:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:53.564 00:52:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:53.564 00:52:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:53.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:53.564 00:52:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:53.564 00:52:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:53.856 [2024-05-15 00:52:05.976725] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
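locking_app_on_unlocked_coremask reverses the order: the first target above ran with --disable-cpumask-locks, leaving core 0 unclaimed, and it is the second target, started normally on -m 0x1 against /var/tmp/spdk2.sock, that takes the lock. Sketched with shortened paths:

    spdk_tgt -m 0x1 --disable-cpumask-locks &        # core 0 stays unclaimed
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &         # this instance claims core 0
    # the shared lock file shows which pid ended up holding core 0
    lslocks | grep spdk_cpu_lock_000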
00:04:53.856 [2024-05-15 00:52:05.976800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1133894 ] 00:04:53.856 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.856 [2024-05-15 00:52:06.087654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.114 [2024-05-15 00:52:06.322617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.681 00:52:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:54.681 00:52:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:04:54.681 00:52:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1133894 00:04:54.681 00:52:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1133894 00:04:54.681 00:52:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:54.940 lslocks: write error 00:04:54.940 00:52:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1133771 00:04:54.940 00:52:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1133771 ']' 00:04:54.940 00:52:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 1133771 00:04:54.940 00:52:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:04:54.940 00:52:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:54.940 00:52:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1133771 00:04:54.940 00:52:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:54.940 00:52:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:54.940 00:52:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1133771' 00:04:54.940 killing process with pid 1133771 00:04:54.940 00:52:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 1133771 00:04:54.940 00:52:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 1133771 00:04:55.875 00:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1133894 00:04:55.875 00:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1133894 ']' 00:04:55.875 00:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 1133894 00:04:55.875 00:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:04:55.875 00:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:55.875 00:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1133894 00:04:56.133 00:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
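The killprocess teardown running above follows the same pattern every time: confirm the pid is still alive, check what it is (an SPDK target reports its comm as reactor_0), then kill and reap it. A sketch assembled from the steps visible in the trace; the sudo special case is only stubbed out here:

    killprocess_sketch() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2> /dev/null || return 0              # already gone, nothing to do
        process_name=$(ps --no-headers -o comm= "$pid")      # reactor_0 for spdk_tgt
        if [[ $process_name == sudo ]]; then
            :                                                 # real helper resolves the child pid here
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }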
00:04:56.133 00:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:56.133 00:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1133894' 00:04:56.133 killing process with pid 1133894 00:04:56.133 00:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 1133894 00:04:56.133 00:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 1133894 00:04:56.391 00:04:56.391 real 0m3.287s 00:04:56.391 user 0m3.445s 00:04:56.391 sys 0m0.995s 00:04:56.391 00:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:56.391 00:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.391 ************************************ 00:04:56.391 END TEST locking_app_on_unlocked_coremask 00:04:56.391 ************************************ 00:04:56.391 00:52:08 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:56.391 00:52:08 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:56.391 00:52:08 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:56.391 00:52:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:56.648 ************************************ 00:04:56.648 START TEST locking_app_on_locked_coremask 00:04:56.648 ************************************ 00:04:56.648 00:52:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:04:56.649 00:52:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1134207 00:04:56.649 00:52:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:56.649 00:52:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1134207 /var/tmp/spdk.sock 00:04:56.649 00:52:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1134207 ']' 00:04:56.649 00:52:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.649 00:52:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:56.649 00:52:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.649 00:52:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:56.649 00:52:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.649 [2024-05-15 00:52:08.838893] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:04:56.649 [2024-05-15 00:52:08.839008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1134207 ] 00:04:56.649 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.649 [2024-05-15 00:52:08.906564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.649 [2024-05-15 00:52:09.016344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.906 00:52:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:56.906 00:52:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:04:56.906 00:52:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1134331 00:04:56.906 00:52:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:56.906 00:52:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1134331 /var/tmp/spdk2.sock 00:04:56.906 00:52:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:04:56.906 00:52:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1134331 /var/tmp/spdk2.sock 00:04:56.906 00:52:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:56.906 00:52:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:56.906 00:52:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:56.906 00:52:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:56.906 00:52:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1134331 /var/tmp/spdk2.sock 00:04:56.906 00:52:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 1134331 ']' 00:04:56.906 00:52:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:56.906 00:52:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:56.906 00:52:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:56.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:56.906 00:52:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:56.906 00:52:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:57.163 [2024-05-15 00:52:09.314585] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
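locking_app_on_locked_coremask is the negative case: pid 1134207 already holds core 0, so the second spdk_tgt launched just above is expected to die with a claim error instead of ever listening on /var/tmp/spdk2.sock, and the NOT wrapper around waitforlisten passes only because the wrapped call fails. A rough sketch of that expect-failure pattern; the real NOT helper in autotest_common.sh also inspects exit codes above 128 for crashes, which is omitted here:

    NOT_sketch() {
        if "$@"; then      # wrapped command unexpectedly succeeded
            return 1
        fi
        return 0           # failure was the desired outcome
    }

    spdk_tgt -m 0x1 &                                         # claims core 0
    NOT_sketch spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock         # second claim must be refused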
00:04:57.163 [2024-05-15 00:52:09.314656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1134331 ] 00:04:57.163 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.163 [2024-05-15 00:52:09.425923] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1134207 has claimed it. 00:04:57.163 [2024-05-15 00:52:09.425997] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:57.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (1134331) - No such process 00:04:57.728 ERROR: process (pid: 1134331) is no longer running 00:04:57.728 00:52:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:57.728 00:52:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:04:57.728 00:52:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:04:57.728 00:52:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:57.728 00:52:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:57.728 00:52:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:57.728 00:52:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1134207 00:04:57.728 00:52:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1134207 00:04:57.728 00:52:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:57.985 lslocks: write error 00:04:57.985 00:52:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1134207 00:04:57.985 00:52:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 1134207 ']' 00:04:57.985 00:52:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 1134207 00:04:57.985 00:52:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:04:57.985 00:52:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:57.985 00:52:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1134207 00:04:57.985 00:52:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:57.985 00:52:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:57.985 00:52:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1134207' 00:04:57.985 killing process with pid 1134207 00:04:57.985 00:52:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 1134207 00:04:57.985 00:52:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 1134207 00:04:58.551 00:04:58.551 real 0m1.999s 00:04:58.551 user 0m2.117s 00:04:58.551 sys 0m0.669s 00:04:58.551 00:52:10 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:04:58.551 00:52:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:58.551 ************************************ 00:04:58.551 END TEST locking_app_on_locked_coremask 00:04:58.551 ************************************ 00:04:58.551 00:52:10 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:58.551 00:52:10 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:58.551 00:52:10 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:58.551 00:52:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:58.551 ************************************ 00:04:58.551 START TEST locking_overlapped_coremask 00:04:58.551 ************************************ 00:04:58.551 00:52:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:04:58.551 00:52:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1134500 00:04:58.551 00:52:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:58.551 00:52:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1134500 /var/tmp/spdk.sock 00:04:58.551 00:52:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 1134500 ']' 00:04:58.551 00:52:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.551 00:52:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:58.551 00:52:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.551 00:52:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:58.551 00:52:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:58.551 [2024-05-15 00:52:10.896207] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:04:58.551 [2024-05-15 00:52:10.896303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1134500 ] 00:04:58.551 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.809 [2024-05-15 00:52:10.965572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:58.809 [2024-05-15 00:52:11.075576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.809 [2024-05-15 00:52:11.075691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:58.809 [2024-05-15 00:52:11.075694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.066 00:52:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:59.066 00:52:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:04:59.066 00:52:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1134592 00:04:59.066 00:52:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1134592 /var/tmp/spdk2.sock 00:04:59.066 00:52:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:04:59.066 00:52:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:59.066 00:52:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1134592 /var/tmp/spdk2.sock 00:04:59.066 00:52:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:59.066 00:52:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:59.066 00:52:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:59.066 00:52:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:59.066 00:52:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1134592 /var/tmp/spdk2.sock 00:04:59.066 00:52:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 1134592 ']' 00:04:59.066 00:52:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:59.066 00:52:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:59.066 00:52:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:59.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:59.066 00:52:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:59.067 00:52:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:59.067 [2024-05-15 00:52:11.378417] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
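The conflict being provoked here is plain mask arithmetic: the running target took -m 0x7 (cores 0, 1, 2) and the second launch asks for -m 0x1c (cores 2, 3, 4), so both want core 2, which pid 1134500 already claimed. Purely illustrative:

    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))    # prints 0x4, i.e. core 2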
00:04:59.067 [2024-05-15 00:52:11.378514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1134592 ] 00:04:59.067 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.323 [2024-05-15 00:52:11.483271] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1134500 has claimed it. 00:04:59.323 [2024-05-15 00:52:11.483346] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:59.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (1134592) - No such process 00:04:59.889 ERROR: process (pid: 1134592) is no longer running 00:04:59.889 00:52:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:59.889 00:52:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:04:59.889 00:52:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:04:59.889 00:52:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:59.889 00:52:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:59.889 00:52:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:59.889 00:52:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:59.889 00:52:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:59.889 00:52:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:59.889 00:52:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:59.889 00:52:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1134500 00:04:59.889 00:52:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 1134500 ']' 00:04:59.889 00:52:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 1134500 00:04:59.889 00:52:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:04:59.889 00:52:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:59.889 00:52:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1134500 00:04:59.889 00:52:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:59.889 00:52:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:59.889 00:52:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1134500' 00:04:59.889 killing process with pid 1134500 00:04:59.889 00:52:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 
1134500 00:04:59.889 00:52:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 1134500 00:05:00.455 00:05:00.455 real 0m1.704s 00:05:00.455 user 0m4.482s 00:05:00.455 sys 0m0.469s 00:05:00.455 00:52:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:00.455 00:52:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:00.455 ************************************ 00:05:00.455 END TEST locking_overlapped_coremask 00:05:00.455 ************************************ 00:05:00.455 00:52:12 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:00.455 00:52:12 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:00.455 00:52:12 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:00.455 00:52:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.455 ************************************ 00:05:00.455 START TEST locking_overlapped_coremask_via_rpc 00:05:00.455 ************************************ 00:05:00.455 00:52:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:05:00.455 00:52:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1134800 00:05:00.455 00:52:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:00.455 00:52:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1134800 /var/tmp/spdk.sock 00:05:00.455 00:52:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1134800 ']' 00:05:00.455 00:52:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.455 00:52:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:00.455 00:52:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.455 00:52:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:00.455 00:52:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.455 [2024-05-15 00:52:12.658015] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:05:00.455 [2024-05-15 00:52:12.658119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1134800 ] 00:05:00.455 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.455 [2024-05-15 00:52:12.731927] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:00.455 [2024-05-15 00:52:12.731975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:00.713 [2024-05-15 00:52:12.848972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.713 [2024-05-15 00:52:12.849027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:00.713 [2024-05-15 00:52:12.849031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.277 00:52:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:01.277 00:52:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:01.277 00:52:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1134916 00:05:01.277 00:52:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1134916 /var/tmp/spdk2.sock 00:05:01.277 00:52:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:01.277 00:52:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1134916 ']' 00:05:01.277 00:52:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:01.277 00:52:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:01.277 00:52:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:01.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:01.277 00:52:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:01.277 00:52:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.277 [2024-05-15 00:52:13.640006] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:05:01.277 [2024-05-15 00:52:13.640088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1134916 ] 00:05:01.534 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.534 [2024-05-15 00:52:13.741899] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
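For the RPC variant both targets are launched with --disable-cpumask-locks, so the overlapping masks 0x7 and 0x1c are tolerated at startup, both print 'CPU core locks deactivated', and the actual claiming is deferred to the framework_enable_cpumask_locks RPC exercised next. The setup, with paths shortened:

    spdk_tgt -m 0x7  --disable-cpumask-locks &                           # cores 0-2, nothing claimed yet
    spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &    # cores 2-4, nothing claimed yet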
00:05:01.534 [2024-05-15 00:52:13.741943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:01.792 [2024-05-15 00:52:13.966468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:01.792 [2024-05-15 00:52:13.969990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:01.792 [2024-05-15 00:52:13.969993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:02.357 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:02.357 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:02.357 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:02.357 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.357 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.357 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.358 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:02.358 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:02.358 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:02.358 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:02.358 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:02.358 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:02.358 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:02.358 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:02.358 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.358 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.358 [2024-05-15 00:52:14.600033] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1134800 has claimed it. 
00:05:02.358 request: 00:05:02.358 { 00:05:02.358 "method": "framework_enable_cpumask_locks", 00:05:02.358 "req_id": 1 00:05:02.358 } 00:05:02.358 Got JSON-RPC error response 00:05:02.358 response: 00:05:02.358 { 00:05:02.358 "code": -32603, 00:05:02.358 "message": "Failed to claim CPU core: 2" 00:05:02.358 } 00:05:02.358 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:02.358 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:02.358 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:02.358 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:02.358 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:02.358 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1134800 /var/tmp/spdk.sock 00:05:02.358 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1134800 ']' 00:05:02.358 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.358 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:02.358 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.358 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:02.358 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.615 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:02.615 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:02.615 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1134916 /var/tmp/spdk2.sock 00:05:02.615 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 1134916 ']' 00:05:02.615 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:02.615 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:02.615 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:02.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
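The error above is the point of the test: framework_enable_cpumask_locks succeeds on the first target, but on the second target it tries to claim core 2, which pid 1134800 already holds via /var/tmp/spdk_cpu_lock_002, so the RPC fails with -32603. Roughly the same exchange could be driven by hand with SPDK's rpc.py client; this is a sketch (socket paths from this log, output shape paraphrased from the response printed above), not captured output:

    scripts/rpc.py framework_enable_cpumask_locks                          # first target: claims its cores
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # expected: "Failed to claim CPU core: 2"
    ls /var/tmp/spdk_cpu_lock_*                                            # lock files the test checks next (000..002)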
00:05:02.615 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:02.615 00:52:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.873 00:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:02.873 00:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:02.873 00:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:02.873 00:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:02.873 00:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:02.873 00:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:02.873 00:05:02.873 real 0m2.496s 00:05:02.873 user 0m1.222s 00:05:02.873 sys 0m0.202s 00:05:02.873 00:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:02.873 00:52:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.873 ************************************ 00:05:02.873 END TEST locking_overlapped_coremask_via_rpc 00:05:02.873 ************************************ 00:05:02.873 00:52:15 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:02.873 00:52:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1134800 ]] 00:05:02.873 00:52:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1134800 00:05:02.873 00:52:15 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 1134800 ']' 00:05:02.873 00:52:15 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 1134800 00:05:02.873 00:52:15 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:05:02.873 00:52:15 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:02.873 00:52:15 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1134800 00:05:02.873 00:52:15 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:02.873 00:52:15 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:02.873 00:52:15 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1134800' 00:05:02.873 killing process with pid 1134800 00:05:02.873 00:52:15 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 1134800 00:05:02.873 00:52:15 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 1134800 00:05:03.438 00:52:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1134916 ]] 00:05:03.438 00:52:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1134916 00:05:03.438 00:52:15 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 1134916 ']' 00:05:03.438 00:52:15 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 1134916 00:05:03.438 00:52:15 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:05:03.438 00:52:15 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:05:03.438 00:52:15 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1134916 00:05:03.438 00:52:15 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:05:03.438 00:52:15 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:05:03.438 00:52:15 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1134916' 00:05:03.438 killing process with pid 1134916 00:05:03.438 00:52:15 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 1134916 00:05:03.438 00:52:15 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 1134916 00:05:03.696 00:52:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:03.696 00:52:16 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:03.696 00:52:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1134800 ]] 00:05:03.696 00:52:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1134800 00:05:03.696 00:52:16 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 1134800 ']' 00:05:03.696 00:52:16 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 1134800 00:05:03.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1134800) - No such process 00:05:03.696 00:52:16 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 1134800 is not found' 00:05:03.696 Process with pid 1134800 is not found 00:05:03.696 00:52:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1134916 ]] 00:05:03.696 00:52:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1134916 00:05:03.696 00:52:16 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 1134916 ']' 00:05:03.696 00:52:16 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 1134916 00:05:03.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1134916) - No such process 00:05:03.696 00:52:16 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 1134916 is not found' 00:05:03.696 Process with pid 1134916 is not found 00:05:03.696 00:52:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:03.696 00:05:03.696 real 0m17.692s 00:05:03.696 user 0m31.028s 00:05:03.696 sys 0m5.404s 00:05:03.696 00:52:16 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:03.696 00:52:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.696 ************************************ 00:05:03.696 END TEST cpu_locks 00:05:03.696 ************************************ 00:05:03.954 00:05:03.954 real 0m43.607s 00:05:03.954 user 1m23.614s 00:05:03.954 sys 0m9.731s 00:05:03.954 00:52:16 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:03.954 00:52:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.954 ************************************ 00:05:03.954 END TEST event 00:05:03.954 ************************************ 00:05:03.954 00:52:16 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:03.954 00:52:16 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:03.954 00:52:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:03.954 00:52:16 -- common/autotest_common.sh@10 -- # set +x 00:05:03.954 ************************************ 00:05:03.954 START TEST thread 00:05:03.954 ************************************ 00:05:03.954 00:52:16 thread -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:03.954 * Looking for test storage... 00:05:03.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:03.954 00:52:16 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:03.954 00:52:16 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:05:03.954 00:52:16 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:03.954 00:52:16 thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.954 ************************************ 00:05:03.954 START TEST thread_poller_perf 00:05:03.954 ************************************ 00:05:03.954 00:52:16 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:03.954 [2024-05-15 00:52:16.244505] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:05:03.954 [2024-05-15 00:52:16.244572] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1135302 ] 00:05:03.954 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.954 [2024-05-15 00:52:16.314022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.212 [2024-05-15 00:52:16.427638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.212 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:05.584 ====================================== 00:05:05.584 busy:2716821227 (cyc) 00:05:05.584 total_run_count: 295000 00:05:05.584 tsc_hz: 2700000000 (cyc) 00:05:05.584 ====================================== 00:05:05.584 poller_cost: 9209 (cyc), 3410 (nsec) 00:05:05.584 00:05:05.584 real 0m1.330s 00:05:05.584 user 0m1.238s 00:05:05.584 sys 0m0.086s 00:05:05.584 00:52:17 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:05.584 00:52:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:05.584 ************************************ 00:05:05.584 END TEST thread_poller_perf 00:05:05.584 ************************************ 00:05:05.584 00:52:17 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:05.584 00:52:17 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:05:05.584 00:52:17 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:05.584 00:52:17 thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.584 ************************************ 00:05:05.584 START TEST thread_poller_perf 00:05:05.584 ************************************ 00:05:05.584 00:52:17 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:05.584 [2024-05-15 00:52:17.627612] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
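The figures from the first poller_perf run above fit together as busy cycles divided by run count, converted with the reported TSC rate: 2716821227 cyc / 295000 polls ≈ 9209 cyc per poll, and 9209 cyc / 2.7 cyc per ns (tsc_hz 2700000000) ≈ 3410 ns, matching the printed poller_cost. Per its command line, that run used -b 1000 pollers with a 1 microsecond period (-l 1) for 1 second (-t 1).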
00:05:05.584 [2024-05-15 00:52:17.627677] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1135462 ] 00:05:05.584 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.584 [2024-05-15 00:52:17.703020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.584 [2024-05-15 00:52:17.818000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.584 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:06.988 ====================================== 00:05:06.988 busy:2702787210 (cyc) 00:05:06.988 total_run_count: 3850000 00:05:06.988 tsc_hz: 2700000000 (cyc) 00:05:06.988 ====================================== 00:05:06.988 poller_cost: 702 (cyc), 260 (nsec) 00:05:06.988 00:05:06.988 real 0m1.328s 00:05:06.988 user 0m1.226s 00:05:06.988 sys 0m0.096s 00:05:06.988 00:52:18 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:06.988 00:52:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:06.988 ************************************ 00:05:06.988 END TEST thread_poller_perf 00:05:06.988 ************************************ 00:05:06.988 00:52:18 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:06.988 00:05:06.988 real 0m2.815s 00:05:06.988 user 0m2.526s 00:05:06.988 sys 0m0.285s 00:05:06.988 00:52:18 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:06.988 00:52:18 thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.988 ************************************ 00:05:06.988 END TEST thread 00:05:06.988 ************************************ 00:05:06.988 00:52:18 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:06.988 00:52:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:06.988 00:52:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:06.988 00:52:18 -- common/autotest_common.sh@10 -- # set +x 00:05:06.988 ************************************ 00:05:06.988 START TEST accel 00:05:06.988 ************************************ 00:05:06.988 00:52:19 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:06.988 * Looking for test storage... 
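The second run drops the period (-l 0), so the same arithmetic gives 2702787210 cyc / 3850000 polls ≈ 702 cyc per poll and 702 / 2.7 ≈ 260 ns; without the 1 µs timer between invocations the pollers fire roughly 13x more often and the measured per-poll cost falls from ~9209 to ~702 cycles. A one-liner reproducing the conversion, with the values copied from the output above:

    awk 'BEGIN { cyc = 2702787210 / 3850000; printf "%d cyc, %d nsec\n", cyc, cyc / 2.7 }'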
00:05:06.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:06.988 00:52:19 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:06.988 00:52:19 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:06.988 00:52:19 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:06.988 00:52:19 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1135659 00:05:06.988 00:52:19 accel -- accel/accel.sh@63 -- # waitforlisten 1135659 00:05:06.988 00:52:19 accel -- common/autotest_common.sh@827 -- # '[' -z 1135659 ']' 00:05:06.988 00:52:19 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.988 00:52:19 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:06.988 00:52:19 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:06.988 00:52:19 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:06.988 00:52:19 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:06.988 00:52:19 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.988 00:52:19 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:06.988 00:52:19 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:06.988 00:52:19 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:06.989 00:52:19 accel -- common/autotest_common.sh@10 -- # set +x 00:05:06.989 00:52:19 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:06.989 00:52:19 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:06.989 00:52:19 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:06.989 00:52:19 accel -- accel/accel.sh@41 -- # jq -r . 00:05:06.989 [2024-05-15 00:52:19.121497] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:05:06.989 [2024-05-15 00:52:19.121591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1135659 ] 00:05:06.989 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.989 [2024-05-15 00:52:19.192877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.989 [2024-05-15 00:52:19.304173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.923 00:52:20 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:07.923 00:52:20 accel -- common/autotest_common.sh@860 -- # return 0 00:05:07.923 00:52:20 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:07.923 00:52:20 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:07.923 00:52:20 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:07.923 00:52:20 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:07.923 00:52:20 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:07.923 00:52:20 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:07.923 00:52:20 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:07.923 00:52:20 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:07.923 00:52:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:07.923 00:52:20 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:07.923 00:52:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:07.923 00:52:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:07.923 00:52:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:07.923 00:52:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:07.923 00:52:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:07.923 00:52:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:07.923 00:52:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:07.923 00:52:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:07.923 00:52:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:07.923 00:52:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:07.923 00:52:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:07.923 00:52:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:07.923 00:52:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:07.923 00:52:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:07.923 00:52:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:07.923 00:52:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:07.923 00:52:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:07.923 00:52:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:07.923 00:52:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:07.923 00:52:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:07.923 00:52:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:07.923 00:52:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:07.923 00:52:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:07.923 00:52:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:07.923 00:52:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:07.923 00:52:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:07.923 00:52:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:07.923 00:52:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:07.923 00:52:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:07.923 00:52:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:07.923 00:52:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:07.923 00:52:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:07.923 00:52:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:07.923 00:52:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:07.923 00:52:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:07.923 00:52:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:07.923 00:52:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:07.923 00:52:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:07.923 00:52:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:07.923 00:52:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:07.923 00:52:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:07.923 00:52:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:07.923 00:52:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:07.923 00:52:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:07.923 
00:52:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:07.923 00:52:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:07.923 00:52:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:07.923 00:52:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:07.923 00:52:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:07.923 00:52:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:07.923 00:52:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:07.923 00:52:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:07.923 00:52:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:07.923 00:52:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:07.923 00:52:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:07.923 00:52:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:07.923 00:52:20 accel -- accel/accel.sh@75 -- # killprocess 1135659 00:05:07.923 00:52:20 accel -- common/autotest_common.sh@946 -- # '[' -z 1135659 ']' 00:05:07.923 00:52:20 accel -- common/autotest_common.sh@950 -- # kill -0 1135659 00:05:07.923 00:52:20 accel -- common/autotest_common.sh@951 -- # uname 00:05:07.923 00:52:20 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:07.923 00:52:20 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1135659 00:05:07.923 00:52:20 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:07.923 00:52:20 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:07.923 00:52:20 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1135659' 00:05:07.923 killing process with pid 1135659 00:05:07.923 00:52:20 accel -- common/autotest_common.sh@965 -- # kill 1135659 00:05:07.923 00:52:20 accel -- common/autotest_common.sh@970 -- # wait 1135659 00:05:08.489 00:52:20 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:08.489 00:52:20 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:08.489 00:52:20 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:05:08.489 00:52:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:08.489 00:52:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:08.489 00:52:20 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:05:08.489 00:52:20 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:08.489 00:52:20 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:08.489 00:52:20 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:08.489 00:52:20 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:08.489 00:52:20 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:08.489 00:52:20 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:08.489 00:52:20 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:08.489 00:52:20 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:08.489 00:52:20 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:08.489 00:52:20 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:08.489 00:52:20 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:08.489 00:52:20 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:08.489 00:52:20 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:08.489 00:52:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:08.489 00:52:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:08.489 ************************************ 00:05:08.489 START TEST accel_missing_filename 00:05:08.489 ************************************ 00:05:08.489 00:52:20 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:05:08.489 00:52:20 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:08.489 00:52:20 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:08.489 00:52:20 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:08.489 00:52:20 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:08.489 00:52:20 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:08.489 00:52:20 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:08.489 00:52:20 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:08.489 00:52:20 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:08.489 00:52:20 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:08.489 00:52:20 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:08.489 00:52:20 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:08.489 00:52:20 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:08.490 00:52:20 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:08.490 00:52:20 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:08.490 00:52:20 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:08.490 00:52:20 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:08.490 [2024-05-15 00:52:20.688436] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:05:08.490 [2024-05-15 00:52:20.688494] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1135961 ] 00:05:08.490 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.490 [2024-05-15 00:52:20.760329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.490 [2024-05-15 00:52:20.878505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.747 [2024-05-15 00:52:20.940398] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:08.747 [2024-05-15 00:52:21.028876] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:05:09.006 A filename is required. 
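accel_missing_filename drives accel_perf with a compress workload and no -l input file, so the app refuses to start ("A filename is required.") and the NOT wrapper verifies the non-zero exit. For contrast, the accel_compress_verify test just below does pass the input file but adds -y, which the compress path also rejects. Roughly, with paths abbreviated to the repo-relative ones used in this log:

    build/examples/accel_perf -t 1 -w compress                           # no -l -> "A filename is required."
    build/examples/accel_perf -t 1 -w compress -l test/accel/bib -y      # input supplied, but -y -> verify not supported for compress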
00:05:09.006 00:52:21 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:09.006 00:52:21 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:09.006 00:52:21 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:09.006 00:52:21 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:09.006 00:52:21 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:09.006 00:52:21 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:09.006 00:05:09.006 real 0m0.481s 00:05:09.006 user 0m0.366s 00:05:09.006 sys 0m0.146s 00:05:09.006 00:52:21 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:09.006 00:52:21 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:09.006 ************************************ 00:05:09.006 END TEST accel_missing_filename 00:05:09.006 ************************************ 00:05:09.006 00:52:21 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:09.006 00:52:21 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:05:09.006 00:52:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:09.006 00:52:21 accel -- common/autotest_common.sh@10 -- # set +x 00:05:09.006 ************************************ 00:05:09.006 START TEST accel_compress_verify 00:05:09.006 ************************************ 00:05:09.006 00:52:21 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:09.006 00:52:21 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:09.006 00:52:21 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:09.006 00:52:21 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:09.006 00:52:21 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:09.006 00:52:21 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:09.006 00:52:21 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:09.006 00:52:21 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:09.006 00:52:21 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:09.006 00:52:21 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:09.006 00:52:21 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:09.006 00:52:21 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:09.006 00:52:21 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:09.006 00:52:21 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:09.006 00:52:21 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:09.006 
00:52:21 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:09.006 00:52:21 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:09.006 [2024-05-15 00:52:21.221388] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:05:09.006 [2024-05-15 00:52:21.221441] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1135982 ] 00:05:09.006 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.006 [2024-05-15 00:52:21.294502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.265 [2024-05-15 00:52:21.412640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.265 [2024-05-15 00:52:21.472943] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:09.265 [2024-05-15 00:52:21.557769] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:05:09.524 00:05:09.524 Compression does not support the verify option, aborting. 00:05:09.524 00:52:21 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:09.524 00:52:21 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:09.524 00:52:21 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:09.524 00:52:21 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:09.524 00:52:21 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:09.524 00:52:21 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:09.524 00:05:09.524 real 0m0.473s 00:05:09.524 user 0m0.355s 00:05:09.524 sys 0m0.148s 00:05:09.524 00:52:21 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:09.524 00:52:21 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:09.524 ************************************ 00:05:09.524 END TEST accel_compress_verify 00:05:09.524 ************************************ 00:05:09.524 00:52:21 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:09.524 00:52:21 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:09.524 00:52:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:09.524 00:52:21 accel -- common/autotest_common.sh@10 -- # set +x 00:05:09.524 ************************************ 00:05:09.524 START TEST accel_wrong_workload 00:05:09.524 ************************************ 00:05:09.524 00:52:21 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:05:09.524 00:52:21 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:09.524 00:52:21 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:09.524 00:52:21 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:09.524 00:52:21 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:09.524 00:52:21 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:09.524 00:52:21 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:09.524 00:52:21 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:05:09.524 00:52:21 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:09.524 00:52:21 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:09.524 00:52:21 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:09.524 00:52:21 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:09.524 00:52:21 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:09.524 00:52:21 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:09.524 00:52:21 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:09.524 00:52:21 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:09.524 00:52:21 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:09.524 Unsupported workload type: foobar 00:05:09.524 [2024-05-15 00:52:21.747303] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:09.524 accel_perf options: 00:05:09.524 [-h help message] 00:05:09.524 [-q queue depth per core] 00:05:09.524 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:09.524 [-T number of threads per core 00:05:09.524 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:09.524 [-t time in seconds] 00:05:09.524 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:09.524 [ dif_verify, , dif_generate, dif_generate_copy 00:05:09.524 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:09.524 [-l for compress/decompress workloads, name of uncompressed input file 00:05:09.524 [-S for crc32c workload, use this seed value (default 0) 00:05:09.524 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:09.524 [-f for fill workload, use this BYTE value (default 255) 00:05:09.524 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:09.524 [-y verify result if this switch is on] 00:05:09.524 [-a tasks to allocate per core (default: same value as -q)] 00:05:09.524 Can be used to spread operations across a wider range of memory. 
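That usage text is printed because foobar is not a valid -w workload; the accel_negative_buffers test below triggers the same text with -x -1. For a well-formed invocation built only from flags in that help output, the accel_crc32c and accel_crc32c_C2 tests further down run, in effect:

    build/examples/accel_perf -t 1 -w crc32c -S 32 -y     # crc32c for 1 s, seed 32, verify results
    build/examples/accel_perf -t 1 -w crc32c -y -C 2      # same, with a 2-element io vector per operation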
00:05:09.524 00:52:21 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:09.524 00:52:21 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:09.524 00:52:21 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:09.524 00:52:21 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:09.524 00:05:09.524 real 0m0.022s 00:05:09.524 user 0m0.013s 00:05:09.524 sys 0m0.009s 00:05:09.524 00:52:21 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:09.524 00:52:21 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:09.524 ************************************ 00:05:09.524 END TEST accel_wrong_workload 00:05:09.524 ************************************ 00:05:09.524 Error: writing output failed: Broken pipe 00:05:09.524 00:52:21 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:09.524 00:52:21 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:05:09.524 00:52:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:09.524 00:52:21 accel -- common/autotest_common.sh@10 -- # set +x 00:05:09.525 ************************************ 00:05:09.525 START TEST accel_negative_buffers 00:05:09.525 ************************************ 00:05:09.525 00:52:21 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:09.525 00:52:21 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:09.525 00:52:21 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:09.525 00:52:21 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:09.525 00:52:21 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:09.525 00:52:21 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:09.525 00:52:21 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:09.525 00:52:21 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:09.525 00:52:21 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:09.525 00:52:21 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:09.525 00:52:21 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:09.525 00:52:21 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:09.525 00:52:21 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:09.525 00:52:21 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:09.525 00:52:21 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:09.525 00:52:21 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:09.525 00:52:21 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:09.525 -x option must be non-negative. 
00:05:09.525 [2024-05-15 00:52:21.819081] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:09.525 accel_perf options: 00:05:09.525 [-h help message] 00:05:09.525 [-q queue depth per core] 00:05:09.525 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:09.525 [-T number of threads per core 00:05:09.525 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:09.525 [-t time in seconds] 00:05:09.525 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:09.525 [ dif_verify, , dif_generate, dif_generate_copy 00:05:09.525 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:09.525 [-l for compress/decompress workloads, name of uncompressed input file 00:05:09.525 [-S for crc32c workload, use this seed value (default 0) 00:05:09.525 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:09.525 [-f for fill workload, use this BYTE value (default 255) 00:05:09.525 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:09.525 [-y verify result if this switch is on] 00:05:09.525 [-a tasks to allocate per core (default: same value as -q)] 00:05:09.525 Can be used to spread operations across a wider range of memory. 00:05:09.525 00:52:21 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:09.525 00:52:21 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:09.525 00:52:21 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:09.525 00:52:21 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:09.525 00:05:09.525 real 0m0.022s 00:05:09.525 user 0m0.012s 00:05:09.525 sys 0m0.011s 00:05:09.525 00:52:21 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:09.525 00:52:21 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:09.525 ************************************ 00:05:09.525 END TEST accel_negative_buffers 00:05:09.525 ************************************ 00:05:09.525 Error: writing output failed: Broken pipe 00:05:09.525 00:52:21 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:09.525 00:52:21 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:09.525 00:52:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:09.525 00:52:21 accel -- common/autotest_common.sh@10 -- # set +x 00:05:09.525 ************************************ 00:05:09.525 START TEST accel_crc32c 00:05:09.525 ************************************ 00:05:09.525 00:52:21 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:09.525 00:52:21 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:09.525 00:52:21 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:09.525 00:52:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.525 00:52:21 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:09.525 00:52:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.525 00:52:21 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 
-y 00:05:09.525 00:52:21 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:09.525 00:52:21 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:09.525 00:52:21 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:09.525 00:52:21 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:09.525 00:52:21 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:09.525 00:52:21 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:09.525 00:52:21 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:09.525 00:52:21 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:09.525 [2024-05-15 00:52:21.889696] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:05:09.525 [2024-05-15 00:52:21.889761] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1136173 ] 00:05:09.784 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.784 [2024-05-15 00:52:21.965860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.784 [2024-05-15 00:52:22.084447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.784 00:52:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:09.784 00:52:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:09.784 00:52:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.784 00:52:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.784 00:52:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:09.784 00:52:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:09.784 00:52:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.784 00:52:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.784 00:52:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:09.784 00:52:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:09.784 00:52:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.784 00:52:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.784 00:52:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:09.784 00:52:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:09.784 00:52:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.784 00:52:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.784 00:52:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:09.784 00:52:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:09.784 00:52:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.784 00:52:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.784 00:52:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:09.784 00:52:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:09.784 00:52:22 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:09.784 00:52:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.784 00:52:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.785 00:52:22 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:09.785 00:52:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:11.160 00:52:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:11.160 00:52:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:11.160 00:52:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:11.160 00:52:23 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:05:11.161 00:52:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:11.161 00:52:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:11.161 00:52:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:11.161 00:52:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:11.161 00:52:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:11.161 00:52:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:11.161 00:52:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:11.161 00:52:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:11.161 00:52:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:11.161 00:52:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:11.161 00:52:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:11.161 00:52:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:11.161 00:52:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:11.161 00:52:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:11.161 00:52:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:11.161 00:52:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:11.161 00:52:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:11.161 00:52:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:11.161 00:52:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:11.161 00:52:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:11.161 00:52:23 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:11.161 00:52:23 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:11.161 00:52:23 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:11.161 00:05:11.161 real 0m1.492s 00:05:11.161 user 0m1.341s 00:05:11.161 sys 0m0.154s 00:05:11.161 00:52:23 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:11.161 00:52:23 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:11.161 ************************************ 00:05:11.161 END TEST accel_crc32c 00:05:11.161 ************************************ 00:05:11.161 00:52:23 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:11.161 00:52:23 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:11.161 00:52:23 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:11.161 00:52:23 accel -- common/autotest_common.sh@10 -- # set +x 00:05:11.161 ************************************ 00:05:11.161 START TEST accel_crc32c_C2 00:05:11.161 ************************************ 00:05:11.161 00:52:23 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:11.161 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:11.161 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:11.161 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.161 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:11.161 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.161 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:11.161 00:52:23 accel.accel_crc32c_C2 -- 
accel/accel.sh@12 -- # build_accel_config 00:05:11.161 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:11.161 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:11.161 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:11.161 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:11.161 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:11.161 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:11.161 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:11.161 [2024-05-15 00:52:23.433205] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:05:11.161 [2024-05-15 00:52:23.433290] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1136326 ] 00:05:11.161 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.161 [2024-05-15 00:52:23.512113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.419 [2024-05-15 00:52:23.631501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.419 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:11.420 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:11.420 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:11.420 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:11.420 00:52:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:12.793 00:52:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:12.793 00:52:24 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.793 00:52:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:12.793 00:52:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:12.793 00:52:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:12.793 00:52:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.793 00:52:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:12.793 00:52:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:12.793 00:52:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:12.793 00:52:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.793 00:52:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:12.793 00:52:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:12.793 00:52:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:12.793 00:52:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.793 00:52:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:12.793 00:52:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:12.793 00:52:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:12.793 00:52:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.793 00:52:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:12.793 00:52:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:12.793 00:52:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:12.793 00:52:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.793 00:52:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:12.793 00:52:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:12.793 00:52:24 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:12.793 00:52:24 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:12.794 00:52:24 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:12.794 00:05:12.794 real 0m1.489s 00:05:12.794 user 0m1.338s 00:05:12.794 sys 0m0.153s 00:05:12.794 00:52:24 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:12.794 00:52:24 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:12.794 ************************************ 00:05:12.794 END TEST accel_crc32c_C2 00:05:12.794 ************************************ 00:05:12.794 00:52:24 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:12.794 00:52:24 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:12.794 00:52:24 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:12.794 00:52:24 accel -- common/autotest_common.sh@10 -- # set +x 00:05:12.794 ************************************ 00:05:12.794 START TEST accel_copy 00:05:12.794 ************************************ 00:05:12.794 00:52:24 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:05:12.794 00:52:24 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:12.794 00:52:24 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:12.794 00:52:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:12.794 00:52:24 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:12.794 00:52:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:12.794 00:52:24 
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:12.794 00:52:24 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:12.794 00:52:24 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:12.794 00:52:24 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:12.794 00:52:24 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:12.794 00:52:24 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:12.794 00:52:24 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:12.794 00:52:24 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:12.794 00:52:24 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:12.794 [2024-05-15 00:52:24.977315] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:05:12.794 [2024-05-15 00:52:24.977389] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1136603 ] 00:05:12.794 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.794 [2024-05-15 00:52:25.053304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.794 [2024-05-15 00:52:25.170651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:13.053 00:52:25 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:13.053 00:52:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:14.429 00:52:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:14.429 00:52:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:14.429 00:52:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:14.429 00:52:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:14.429 00:52:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:14.429 00:52:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:14.429 00:52:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:14.429 00:52:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
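The case "$var" / IFS=: / read -r var val records that dominate this trace come from accel.sh reading accel_perf's output line by line as colon-separated key/value pairs and latching the module and opcode it reports. A minimal sketch of that parsing pattern, with a hard-coded here-string standing in for the real accel_perf output rather than the script's actual data source:

while IFS=: read -r var val; do
    case "$var" in
        opc)    accel_opc=$val ;;      # workload type, e.g. copy or crc32c
        module) accel_module=$val ;;   # software unless a hardware engine is loaded
        *)      ;;                     # everything else is ignored
    esac
done <<< $'opc:copy\nmodule:software'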
00:05:14.430 00:52:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:14.430 00:52:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:14.430 00:52:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:14.430 00:52:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:14.430 00:52:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:14.430 00:52:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:14.430 00:52:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:14.430 00:52:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:14.430 00:52:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:14.430 00:52:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:14.430 00:52:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:14.430 00:52:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:14.430 00:52:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:14.430 00:52:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:14.430 00:52:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:14.430 00:52:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:14.430 00:52:26 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:14.430 00:52:26 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:14.430 00:52:26 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:14.430 00:05:14.430 real 0m1.475s 00:05:14.430 user 0m1.335s 00:05:14.430 sys 0m0.142s 00:05:14.430 00:52:26 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:14.430 00:52:26 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:14.430 ************************************ 00:05:14.430 END TEST accel_copy 00:05:14.430 ************************************ 00:05:14.430 00:52:26 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:14.430 00:52:26 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:05:14.430 00:52:26 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:14.430 00:52:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:14.430 ************************************ 00:05:14.430 START TEST accel_fill 00:05:14.430 ************************************ 00:05:14.430 00:52:26 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:14.430 00:52:26 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:14.430 [2024-05-15 00:52:26.501332] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:05:14.430 [2024-05-15 00:52:26.501386] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1136761 ] 00:05:14.430 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.430 [2024-05-15 00:52:26.578413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.430 [2024-05-15 00:52:26.701982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:14.430 00:52:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:14.431 00:52:26 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:14.431 00:52:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:15.806 00:52:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:15.806 00:52:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:15.806 00:52:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:15.806 00:52:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:15.806 00:52:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:15.806 00:52:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:15.806 00:52:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:15.806 00:52:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:15.806 00:52:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:15.806 00:52:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:15.806 00:52:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:15.806 00:52:27 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:05:15.806 00:52:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:15.806 00:52:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:15.806 00:52:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:15.806 00:52:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:15.806 00:52:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:15.806 00:52:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:15.806 00:52:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:15.806 00:52:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:15.806 00:52:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:15.806 00:52:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:15.806 00:52:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:15.806 00:52:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:15.806 00:52:27 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:15.806 00:52:27 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:15.806 00:52:27 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:15.806 00:05:15.806 real 0m1.501s 00:05:15.806 user 0m1.345s 00:05:15.806 sys 0m0.158s 00:05:15.806 00:52:27 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:15.806 00:52:27 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:15.806 ************************************ 00:05:15.806 END TEST accel_fill 00:05:15.806 ************************************ 00:05:15.806 00:52:28 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:15.807 00:52:28 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:15.807 00:52:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:15.807 00:52:28 accel -- common/autotest_common.sh@10 -- # set +x 00:05:15.807 ************************************ 00:05:15.807 START TEST accel_copy_crc32c 00:05:15.807 ************************************ 00:05:15.807 00:52:28 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:05:15.807 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:15.807 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:15.807 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:15.807 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:15.807 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:15.807 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:15.807 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:15.807 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:15.807 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:15.807 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:15.807 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:15.807 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:15.807 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:15.807 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
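Each of these cases boils down to one invocation of the accel_perf example binary with the workload named by run_test; the -c /dev/fd/62 argument points it at a JSON accel config that the harness exposes on file descriptor 62 instead of writing a temporary file. A hedged reproduction of the copy_crc32c invocation seen above, where the here-string on fd 62 and the empty '{}' config are stand-ins and not what build_accel_config actually emits:

# same command line as the trace above, with an illustrative inline config on fd 62
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
    -c /dev/fd/62 -t 1 -w copy_crc32c -y 62<<< '{}'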
00:05:15.807 [2024-05-15 00:52:28.060343] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:05:15.807 [2024-05-15 00:52:28.060409] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1136918 ] 00:05:15.807 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.807 [2024-05-15 00:52:28.133585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.066 [2024-05-15 00:52:28.257040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.066 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:16.066 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:16.066 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:16.066 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:16.066 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:16.066 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:16.066 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:16.066 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:16.066 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:16.066 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:16.066 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:16.066 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:16.066 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:16.066 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:16.066 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:16.066 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:16.066 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:16.066 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:16.066 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:16.066 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:16.066 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:16.067 00:52:28 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:16.067 00:52:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.444 00:52:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:17.444 00:52:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.444 00:52:29 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:05:17.444 00:52:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.444 00:52:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:17.444 00:52:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.444 00:52:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.444 00:52:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.444 00:52:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:17.444 00:52:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.444 00:52:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.444 00:52:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.444 00:52:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:17.444 00:52:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.444 00:52:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.444 00:52:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.444 00:52:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:17.444 00:52:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.444 00:52:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.444 00:52:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.444 00:52:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:17.444 00:52:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:17.444 00:52:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:17.444 00:52:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:17.444 00:52:29 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:17.444 00:52:29 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:17.444 00:52:29 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:17.444 00:05:17.444 real 0m1.488s 00:05:17.444 user 0m1.347s 00:05:17.444 sys 0m0.143s 00:05:17.444 00:52:29 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:17.444 00:52:29 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:17.444 ************************************ 00:05:17.444 END TEST accel_copy_crc32c 00:05:17.444 ************************************ 00:05:17.444 00:52:29 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:17.444 00:52:29 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:17.444 00:52:29 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:17.444 00:52:29 accel -- common/autotest_common.sh@10 -- # set +x 00:05:17.444 ************************************ 00:05:17.444 START TEST accel_copy_crc32c_C2 00:05:17.444 ************************************ 00:05:17.444 00:52:29 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:17.444 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:17.444 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:17.444 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.444 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:17.444 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:05:17.444 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:17.444 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:17.444 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:17.444 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:17.444 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.444 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.444 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:17.444 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:17.444 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:17.444 [2024-05-15 00:52:29.603013] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:05:17.444 [2024-05-15 00:52:29.603069] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1137198 ] 00:05:17.444 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.444 [2024-05-15 00:52:29.677014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.444 [2024-05-15 00:52:29.806247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.703 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:17.703 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.703 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.703 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.703 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:17.703 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.703 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.703 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.703 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:17.703 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.703 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.703 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.703 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:17.703 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.703 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.703 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.703 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:17.703 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.703 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.703 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.703 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:17.703 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.703 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # 
accel_opc=copy_crc32c 00:05:17.703 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.703 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.703 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:17.703 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.703 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:17.704 00:52:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.078 00:52:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:19.078 00:52:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.078 00:52:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.078 00:52:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.078 00:52:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:19.078 00:52:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.078 00:52:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.078 00:52:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.078 00:52:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:19.078 00:52:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.078 00:52:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.078 00:52:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.078 00:52:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:19.078 00:52:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.078 00:52:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.078 00:52:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.078 00:52:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:19.078 00:52:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.078 00:52:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.078 00:52:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.078 00:52:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:19.078 00:52:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.078 00:52:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:19.078 00:52:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:19.078 00:52:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:19.078 00:52:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:19.078 00:52:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:19.078 00:05:19.078 real 0m1.499s 00:05:19.078 user 0m1.344s 00:05:19.078 sys 0m0.156s 00:05:19.078 00:52:31 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:19.078 00:52:31 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:19.078 
************************************ 00:05:19.078 END TEST accel_copy_crc32c_C2 00:05:19.078 ************************************ 00:05:19.078 00:52:31 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:19.078 00:52:31 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:19.078 00:52:31 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:19.078 00:52:31 accel -- common/autotest_common.sh@10 -- # set +x 00:05:19.078 ************************************ 00:05:19.078 START TEST accel_dualcast 00:05:19.078 ************************************ 00:05:19.078 00:52:31 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:19.078 [2024-05-15 00:52:31.152026] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
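The asterisk banners and the real/user/sys triplet wrapped around every case come from the run_test helper (the common/autotest_common.sh frames in the trace): it prints a START banner, times the named command, then prints the matching END banner. A simplified sketch of that wrapper, leaving out the xtrace toggling and exit-status bookkeeping the real helper also does:

run_test() {
    local name=$1; shift
    echo "************ START TEST $name ************"
    time "$@"        # e.g. accel_test -t 1 -w dualcast -y
    echo "************ END TEST $name ************"
}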
00:05:19.078 [2024-05-15 00:52:31.152093] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1137354 ] 00:05:19.078 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.078 [2024-05-15 00:52:31.224845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.078 [2024-05-15 00:52:31.348012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:19.078 
00:52:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:19.078 00:52:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:20.452 00:52:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:20.452 00:52:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:20.452 00:52:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:20.452 00:52:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:20.452 00:52:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:20.452 00:52:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:20.452 00:52:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:20.452 00:52:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:20.452 00:52:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:20.452 00:52:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:20.452 00:52:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:20.452 00:52:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:20.452 00:52:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:20.452 00:52:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:20.452 00:52:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:20.452 00:52:32 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:05:20.452 00:52:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:20.452 00:52:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:20.452 00:52:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:20.452 00:52:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:20.452 00:52:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:20.452 00:52:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:20.452 00:52:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:20.452 00:52:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:20.452 00:52:32 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:20.452 00:52:32 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:20.452 00:52:32 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:20.452 00:05:20.452 real 0m1.498s 00:05:20.452 user 0m1.344s 00:05:20.452 sys 0m0.156s 00:05:20.452 00:52:32 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:20.452 00:52:32 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:20.452 ************************************ 00:05:20.452 END TEST accel_dualcast 00:05:20.452 ************************************ 00:05:20.452 00:52:32 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:20.452 00:52:32 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:20.452 00:52:32 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:20.452 00:52:32 accel -- common/autotest_common.sh@10 -- # set +x 00:05:20.452 ************************************ 00:05:20.452 START TEST accel_compare 00:05:20.452 ************************************ 00:05:20.452 00:52:32 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:05:20.452 00:52:32 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:20.452 00:52:32 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:20.452 00:52:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.452 00:52:32 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:20.452 00:52:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.452 00:52:32 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:20.452 00:52:32 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:20.452 00:52:32 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:20.452 00:52:32 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:20.452 00:52:32 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.452 00:52:32 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.452 00:52:32 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:20.452 00:52:32 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:20.452 00:52:32 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:20.452 [2024-05-15 00:52:32.702884] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
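The trace above and below is the accel.sh harness driving SPDK's accel_perf example once per workload; the exact binary and flags are echoed in the trace itself. As a rough hand-run equivalent of this compare pass against the same checkout, assuming the default software module and omitting the /dev/fd/62 JSON config that the harness supplies:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/examples/accel_perf -t 1 -w compare -y

Here -w selects the workload, -t 1 caps the run at one second, and -y requests result verification, matching the 'compare', '1 seconds' and 'Yes' values read back by the val loop.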
00:05:20.452 [2024-05-15 00:52:32.702958] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1137526 ] 00:05:20.452 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.452 [2024-05-15 00:52:32.781341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.711 [2024-05-15 00:52:32.902099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.711 00:52:32 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:20.711 00:52:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:22.083 00:52:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:22.083 00:52:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:22.083 00:52:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:22.083 00:52:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:22.083 00:52:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:22.083 00:52:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:22.083 00:52:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:22.083 00:52:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:22.083 00:52:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:22.083 00:52:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:22.083 00:52:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:22.083 00:52:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:22.083 00:52:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:22.083 00:52:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:22.083 00:52:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:22.083 00:52:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:22.083 00:52:34 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:05:22.083 00:52:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:22.083 00:52:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:22.083 00:52:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:22.083 00:52:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:22.083 00:52:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:22.083 00:52:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:22.083 00:52:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:22.083 00:52:34 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:22.083 00:52:34 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:22.083 00:52:34 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:22.083 00:05:22.083 real 0m1.491s 00:05:22.083 user 0m1.337s 00:05:22.083 sys 0m0.155s 00:05:22.083 00:52:34 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:22.083 00:52:34 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:22.083 ************************************ 00:05:22.083 END TEST accel_compare 00:05:22.083 ************************************ 00:05:22.083 00:52:34 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:22.083 00:52:34 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:22.083 00:52:34 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:22.083 00:52:34 accel -- common/autotest_common.sh@10 -- # set +x 00:05:22.083 ************************************ 00:05:22.083 START TEST accel_xor 00:05:22.083 ************************************ 00:05:22.083 00:52:34 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:05:22.083 00:52:34 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:22.083 00:52:34 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:22.083 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.083 00:52:34 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:22.083 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.083 00:52:34 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:22.083 00:52:34 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:22.083 00:52:34 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:22.083 00:52:34 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:22.083 00:52:34 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.083 00:52:34 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.083 00:52:34 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:22.083 00:52:34 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:22.083 00:52:34 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:22.083 [2024-05-15 00:52:34.250389] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:05:22.083 [2024-05-15 00:52:34.250457] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1137784 ] 00:05:22.083 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.083 [2024-05-15 00:52:34.322907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.083 [2024-05-15 00:52:34.442281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:22.346 00:52:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.759 00:52:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.759 00:52:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.759 00:52:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.759 00:52:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.759 00:52:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.759 00:52:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.759 00:52:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.759 00:52:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.759 00:52:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.759 00:52:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.759 00:52:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.759 00:52:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.759 00:52:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.759 00:52:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.760 00:52:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.760 00:52:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.760 00:52:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.760 
00:52:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.760 00:52:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.760 00:52:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.760 00:52:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.760 00:52:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.760 00:52:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.760 00:52:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.760 00:52:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:23.760 00:52:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:23.760 00:52:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:23.760 00:05:23.760 real 0m1.488s 00:05:23.760 user 0m1.337s 00:05:23.760 sys 0m0.153s 00:05:23.760 00:52:35 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:23.760 00:52:35 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:23.760 ************************************ 00:05:23.760 END TEST accel_xor 00:05:23.760 ************************************ 00:05:23.760 00:52:35 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:23.760 00:52:35 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:23.760 00:52:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:23.760 00:52:35 accel -- common/autotest_common.sh@10 -- # set +x 00:05:23.760 ************************************ 00:05:23.760 START TEST accel_xor 00:05:23.760 ************************************ 00:05:23.760 00:52:35 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:05:23.760 00:52:35 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:23.760 00:52:35 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:23.760 00:52:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.760 00:52:35 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:23.760 00:52:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.760 00:52:35 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:23.760 00:52:35 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:23.760 00:52:35 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:23.760 00:52:35 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:23.760 00:52:35 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.760 00:52:35 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.760 00:52:35 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:23.760 00:52:35 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:23.760 00:52:35 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:23.760 [2024-05-15 00:52:35.789422] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
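The xor pass that starts here is the same harness call with one extra argument: run_test passes '-x 3', which shows up in the val loop below as 'val=3' alongside the usual 4096-byte buffer and one-second runtime (the earlier xor run used the default 'val=2'). A rough stand-alone equivalent, with the same caveats as the sketch above:

    ./build/examples/accel_perf -t 1 -w xor -y -x 3

Reading -x 3 as three xor source buffers is an inference from the value echoed in the trace; the log itself records only the flag and the number.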
00:05:23.760 [2024-05-15 00:52:35.789488] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1137951 ] 00:05:23.760 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.760 [2024-05-15 00:52:35.867817] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.760 [2024-05-15 00:52:35.990982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:23.760 00:52:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.133 00:52:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:25.133 00:52:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:25.133 00:52:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.133 00:52:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.133 00:52:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:25.133 00:52:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:25.133 00:52:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.133 00:52:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.133 00:52:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:25.133 00:52:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:25.133 00:52:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.133 00:52:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.133 00:52:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:25.133 00:52:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:25.133 00:52:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.133 00:52:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.133 00:52:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:25.133 
00:52:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:25.133 00:52:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.133 00:52:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.133 00:52:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:25.133 00:52:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:25.133 00:52:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:25.133 00:52:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:25.133 00:52:37 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:25.133 00:52:37 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:25.133 00:52:37 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:25.133 00:05:25.133 real 0m1.503s 00:05:25.133 user 0m1.354s 00:05:25.133 sys 0m0.151s 00:05:25.133 00:52:37 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.133 00:52:37 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:25.133 ************************************ 00:05:25.133 END TEST accel_xor 00:05:25.133 ************************************ 00:05:25.133 00:52:37 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:25.133 00:52:37 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:25.133 00:52:37 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.133 00:52:37 accel -- common/autotest_common.sh@10 -- # set +x 00:05:25.133 ************************************ 00:05:25.133 START TEST accel_dif_verify 00:05:25.133 ************************************ 00:05:25.133 00:52:37 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:05:25.133 00:52:37 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:25.133 00:52:37 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:25.133 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.133 00:52:37 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:25.133 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.133 00:52:37 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:25.133 00:52:37 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:25.133 00:52:37 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:25.134 00:52:37 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:25.134 00:52:37 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.134 00:52:37 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.134 00:52:37 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:25.134 00:52:37 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:25.134 00:52:37 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:25.134 [2024-05-15 00:52:37.345331] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
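From this point the harness moves on to the DIF workloads; the dif_verify case is launched without -y (the run_test line reads 'accel_test -t 1 -w dif_verify'). A rough stand-alone equivalent, under the same assumptions as the sketch above:

    ./build/examples/accel_perf -t 1 -w dif_verify

The val loop for this case reads back two '4096 bytes' values plus '512 bytes' and '8 bytes', which do not appear in the dualcast/compare/xor traces; treating them as the DIF block and metadata sizes the harness feeds to accel_perf is an inference from the trace rather than something the log states.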
00:05:25.134 [2024-05-15 00:52:37.345397] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1138182 ] 00:05:25.134 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.134 [2024-05-15 00:52:37.425263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.392 [2024-05-15 00:52:37.548730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.392 
00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.392 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.393 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.393 00:52:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:25.393 00:52:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.393 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.393 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:25.393 00:52:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:25.393 00:52:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:25.393 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:25.393 00:52:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:26.767 00:52:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:26.767 
00:52:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:26.767 00:52:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:26.767 00:52:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:26.767 00:52:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:26.767 00:52:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:26.767 00:52:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:26.767 00:52:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:26.767 00:52:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:26.767 00:52:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:26.767 00:52:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:26.767 00:52:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:26.767 00:52:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:26.767 00:52:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:26.767 00:52:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:26.767 00:52:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:26.767 00:52:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:26.767 00:52:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:26.767 00:52:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:26.767 00:52:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:26.767 00:52:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:26.767 00:52:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:26.767 00:52:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:26.767 00:52:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:26.767 00:52:38 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:26.767 00:52:38 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:26.767 00:52:38 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:26.767 00:05:26.767 real 0m1.500s 00:05:26.767 user 0m1.343s 00:05:26.767 sys 0m0.160s 00:05:26.767 00:52:38 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.767 00:52:38 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:26.767 ************************************ 00:05:26.767 END TEST accel_dif_verify 00:05:26.767 ************************************ 00:05:26.767 00:52:38 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:26.767 00:52:38 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:26.767 00:52:38 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:26.767 00:52:38 accel -- common/autotest_common.sh@10 -- # set +x 00:05:26.767 ************************************ 00:05:26.767 START TEST accel_dif_generate 00:05:26.767 ************************************ 00:05:26.767 00:52:38 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:05:26.767 00:52:38 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:26.767 00:52:38 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:26.767 00:52:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:26.767 00:52:38 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 
00:05:26.767 00:52:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:26.767 00:52:38 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:26.767 00:52:38 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:26.767 00:52:38 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:26.767 00:52:38 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:26.767 00:52:38 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.767 00:52:38 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.767 00:52:38 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:26.767 00:52:38 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:26.767 00:52:38 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:26.767 [2024-05-15 00:52:38.896727] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:05:26.767 [2024-05-15 00:52:38.896791] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1138386 ] 00:05:26.767 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.767 [2024-05-15 00:52:38.974354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.767 [2024-05-15 00:52:39.097572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.025 00:52:39 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:27.025 00:52:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:05:27.026 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.026 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.026 00:52:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:27.026 00:52:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.026 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.026 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.026 00:52:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:27.026 00:52:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.026 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.026 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:27.026 00:52:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:27.026 00:52:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:27.026 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:27.026 00:52:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:28.401 00:52:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:28.401 00:52:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:28.401 00:52:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:28.401 00:52:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:28.401 00:52:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:28.401 00:52:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:28.401 00:52:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:28.401 00:52:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:28.401 00:52:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:28.401 00:52:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:28.401 00:52:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:28.401 00:52:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:28.401 00:52:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:28.401 00:52:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:28.401 00:52:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:28.401 00:52:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:28.401 00:52:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:28.401 00:52:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:28.401 00:52:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:28.401 00:52:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:28.401 00:52:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:28.401 00:52:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:28.401 00:52:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:28.401 00:52:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:28.401 00:52:40 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:28.401 00:52:40 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:28.401 00:52:40 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:28.401 00:05:28.401 real 0m1.504s 00:05:28.401 user 0m1.354s 00:05:28.401 sys 
0m0.153s 00:05:28.401 00:52:40 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:28.401 00:52:40 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:28.401 ************************************ 00:05:28.401 END TEST accel_dif_generate 00:05:28.401 ************************************ 00:05:28.401 00:52:40 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:28.401 00:52:40 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:28.401 00:52:40 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:28.401 00:52:40 accel -- common/autotest_common.sh@10 -- # set +x 00:05:28.401 ************************************ 00:05:28.401 START TEST accel_dif_generate_copy 00:05:28.401 ************************************ 00:05:28.401 00:52:40 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:05:28.401 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:28.401 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:28.401 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.401 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:28.401 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.401 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:28.401 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:28.401 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:28.401 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:28.401 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:28.401 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:28.401 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:28.401 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:28.401 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:28.401 [2024-05-15 00:52:40.456318] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:05:28.401 [2024-05-15 00:52:40.456384] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1138538 ] 00:05:28.401 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.401 [2024-05-15 00:52:40.530254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.401 [2024-05-15 00:52:40.653431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.401 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:28.401 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.401 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.401 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.401 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:28.401 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.401 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.401 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.401 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:28.401 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.401 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.402 00:52:40 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.402 00:52:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.778 00:52:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:29.778 00:52:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.778 00:52:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:05:29.778 00:52:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.778 00:52:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:29.778 00:52:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.778 00:52:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.778 00:52:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.778 00:52:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:29.778 00:52:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.778 00:52:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.778 00:52:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.778 00:52:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:29.778 00:52:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.778 00:52:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.778 00:52:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.778 00:52:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:29.778 00:52:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.778 00:52:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.778 00:52:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.778 00:52:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:29.778 00:52:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:29.778 00:52:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:29.778 00:52:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:29.778 00:52:41 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:29.778 00:52:41 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:29.778 00:52:41 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:29.778 00:05:29.778 real 0m1.494s 00:05:29.778 user 0m1.350s 00:05:29.778 sys 0m0.146s 00:05:29.778 00:52:41 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:29.778 00:52:41 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:29.778 ************************************ 00:05:29.778 END TEST accel_dif_generate_copy 00:05:29.778 ************************************ 00:05:29.778 00:52:41 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:29.778 00:52:41 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:29.778 00:52:41 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:05:29.778 00:52:41 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:29.778 00:52:41 accel -- common/autotest_common.sh@10 -- # set +x 00:05:29.778 ************************************ 00:05:29.778 START TEST accel_comp 00:05:29.778 ************************************ 00:05:29.778 00:52:41 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:29.778 00:52:41 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:29.778 00:52:41 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:05:29.778 00:52:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:29.778 00:52:41 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:29.778 00:52:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:29.778 00:52:41 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:29.778 00:52:41 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:29.778 00:52:41 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:29.778 00:52:41 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:29.778 00:52:41 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.778 00:52:41 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.778 00:52:41 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:29.778 00:52:41 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:29.778 00:52:41 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:29.778 [2024-05-15 00:52:42.004760] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:05:29.779 [2024-05-15 00:52:42.004826] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1138814 ] 00:05:29.779 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.779 [2024-05-15 00:52:42.078602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.036 [2024-05-15 00:52:42.202644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.036 00:52:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:30.036 00:52:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.036 00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.036 00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:30.036 00:52:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:30.036 00:52:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.036 00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.036 00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:30.036 00:52:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.037 
00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:30.037 00:52:42 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:30.037 00:52:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:31.408 00:52:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:31.408 00:52:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.408 00:52:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:31.408 00:52:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:31.408 00:52:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:31.408 00:52:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.408 00:52:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:31.408 00:52:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:31.408 00:52:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:31.408 00:52:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.408 00:52:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:31.408 00:52:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:31.408 00:52:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:31.408 00:52:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.408 00:52:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:31.408 00:52:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:31.408 00:52:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:31.408 00:52:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.408 00:52:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:31.408 00:52:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:31.408 00:52:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:31.408 00:52:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.408 00:52:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:31.408 00:52:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:31.408 00:52:43 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:31.408 00:52:43 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:31.408 00:52:43 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:31.408 00:05:31.408 real 0m1.508s 00:05:31.408 user 0m1.362s 00:05:31.408 sys 0m0.149s 00:05:31.408 00:52:43 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.408 00:52:43 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:31.408 ************************************ 00:05:31.408 END TEST accel_comp 00:05:31.408 ************************************ 00:05:31.408 00:52:43 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:31.408 00:52:43 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:31.408 00:52:43 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.408 00:52:43 accel -- common/autotest_common.sh@10 -- # set +x 00:05:31.408 ************************************ 00:05:31.408 START TEST accel_decomp 00:05:31.408 ************************************ 00:05:31.408 00:52:43 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:31.408 00:52:43 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:31.408 00:52:43 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:31.408 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:31.408 00:52:43 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:31.408 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:31.408 00:52:43 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:31.408 00:52:43 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:31.408 00:52:43 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.408 00:52:43 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.408 00:52:43 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.408 00:52:43 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.408 00:52:43 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.408 00:52:43 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:31.408 00:52:43 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:31.408 [2024-05-15 00:52:43.564340] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:05:31.408 [2024-05-15 00:52:43.564407] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1138972 ] 00:05:31.408 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.409 [2024-05-15 00:52:43.636712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.409 [2024-05-15 00:52:43.759832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:31.667 00:52:43 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:31.667 00:52:43 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:31.667 00:52:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:33.040 00:52:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:33.040 00:52:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.040 00:52:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:33.040 00:52:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:33.040 00:52:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:33.040 00:52:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.040 00:52:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:33.040 00:52:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:33.040 00:52:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:33.040 00:52:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.040 00:52:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:33.040 00:52:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:33.040 00:52:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:33.040 00:52:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.040 00:52:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:33.040 00:52:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:33.040 00:52:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:33.040 00:52:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.040 00:52:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:33.040 00:52:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:33.040 00:52:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:33.040 00:52:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:33.040 00:52:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:33.040 00:52:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:33.040 00:52:45 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:33.040 00:52:45 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:33.040 00:52:45 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:33.040 00:05:33.040 real 0m1.503s 00:05:33.040 user 0m1.345s 00:05:33.040 sys 0m0.161s 00:05:33.040 00:52:45 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.040 00:52:45 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:33.040 ************************************ 00:05:33.040 END TEST accel_decomp 00:05:33.040 ************************************ 00:05:33.040 
00:52:45 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:33.040 00:52:45 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:05:33.040 00:52:45 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:33.040 00:52:45 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.040 ************************************ 00:05:33.040 START TEST accel_decmop_full 00:05:33.040 ************************************ 00:05:33.040 00:52:45 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:33.040 00:52:45 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:05:33.040 00:52:45 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:05:33.040 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.040 00:52:45 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:33.040 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.040 00:52:45 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:33.040 00:52:45 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:05:33.040 00:52:45 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.040 00:52:45 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.040 00:52:45 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.040 00:52:45 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.040 00:52:45 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.040 00:52:45 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:05:33.040 00:52:45 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:05:33.040 [2024-05-15 00:52:45.119969] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
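accel_decmop_full (the name appears that way in the harness) reuses the decompress workload from the accel_decomp case above; the only difference in the command line recorded here is the extra '-o 0', and the trace a few lines below echoes a block size of '111250 bytes' instead of '4096 bytes', so -o 0 appears to make accel_perf process the whole test/accel/bib input in one transfer rather than in 4 KiB chunks. A side-by-side sketch of the two invocations, assuming the workspace path shown in the log; -y (result verification) is passed by the harness in both cases:

  BIB=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
  "$PERF" -t 1 -w decompress -l "$BIB" -y          # accel_decomp: default 4096-byte blocks
  "$PERF" -t 1 -w decompress -l "$BIB" -y -o 0     # accel_decmop_full: full-size transfer ('111250 bytes' in the trace)

Whether the 111250-byte figure simply tracks the size of the bib input could be checked with `stat -c %s "$BIB"` on the build node; that is an assumption, not something this log states.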
00:05:33.040 [2024-05-15 00:52:45.120047] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1139132 ] 00:05:33.040 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.040 [2024-05-15 00:52:45.193716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.040 [2024-05-15 00:52:45.315981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.040 00:52:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:33.040 00:52:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:33.040 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.040 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
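Each accel_perf start in this log, including the one above, prints 'EAL: No free 2048 kB hugepages reported on node 1'. For these runs the notice is informational — the tests above still reach their END TEST markers with normal timings — and only means DPDK found no 2 MB hugepages reserved on NUMA node 1. A hedged sketch of how a similar rig could pre-reserve them if the notice ever mattered (not taken from this log; SPDK setups normally rely on scripts/setup.sh for hugepage provisioning):

  # Reserve 512 x 2 MB hugepages on NUMA node 1 (the count is illustrative only).
  echo 512 | sudo tee /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
  # Alternatively, let SPDK size and mount hugepages itself (HUGEMEM is in MB):
  sudo HUGEMEM=2048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh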
00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:33.041 00:52:45 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:34.414 00:52:46 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:34.414 00:52:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:34.414 00:52:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:34.414 00:52:46 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:05:34.414 00:52:46 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:34.414 00:52:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:34.414 00:52:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:34.414 00:52:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:34.414 00:52:46 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:34.414 00:52:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:34.414 00:52:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:34.414 00:52:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:34.414 00:52:46 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:34.414 00:52:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:34.414 00:52:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:34.414 00:52:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:34.414 00:52:46 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:34.414 00:52:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:34.414 00:52:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:34.414 00:52:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:34.414 00:52:46 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:34.414 00:52:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:34.414 00:52:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:34.414 00:52:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:34.414 00:52:46 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:34.414 00:52:46 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:34.414 00:52:46 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:34.414 00:05:34.414 real 0m1.502s 00:05:34.414 user 0m1.352s 00:05:34.414 sys 0m0.153s 00:05:34.414 00:52:46 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:34.414 00:52:46 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:05:34.414 ************************************ 00:05:34.414 END TEST accel_decmop_full 00:05:34.414 ************************************ 00:05:34.414 00:52:46 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:34.414 00:52:46 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:05:34.414 00:52:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:34.414 00:52:46 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.414 ************************************ 00:05:34.414 START TEST accel_decomp_mcore 00:05:34.414 ************************************ 00:05:34.414 00:52:46 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:34.414 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:34.414 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:34.414 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.414 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:34.414 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.414 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:34.414 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:34.414 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.414 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.414 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.415 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.415 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.415 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:34.415 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:34.415 [2024-05-15 00:52:46.673317] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:05:34.415 [2024-05-15 00:52:46.673382] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1139402 ] 00:05:34.415 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.415 [2024-05-15 00:52:46.745594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:34.673 [2024-05-15 00:52:46.872285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.673 [2024-05-15 00:52:46.872338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.673 [2024-05-15 00:52:46.872392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:34.673 [2024-05-15 00:52:46.872396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:34.673 00:52:46 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:34.673 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.674 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:34.674 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.674 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.674 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:34.674 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.674 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.674 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.674 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:34.674 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.674 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.674 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.674 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:34.674 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.674 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.674 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.674 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:34.674 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.674 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.674 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.674 00:52:46 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:34.674 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.674 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.674 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.674 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:34.674 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.674 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.674 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.674 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:34.674 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.674 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.674 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:34.674 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:34.674 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:34.674 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:34.674 00:52:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.045 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.046 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:36.046 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:36.046 00:52:48 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:36.046 00:05:36.046 real 0m1.512s 00:05:36.046 user 0m4.820s 00:05:36.046 sys 0m0.171s 00:05:36.046 00:52:48 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.046 00:52:48 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:36.046 ************************************ 00:05:36.046 END TEST accel_decomp_mcore 00:05:36.046 ************************************ 00:05:36.046 00:52:48 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:36.046 00:52:48 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:05:36.046 00:52:48 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:36.046 00:52:48 accel -- common/autotest_common.sh@10 -- # set +x 00:05:36.046 ************************************ 00:05:36.046 START TEST accel_decomp_full_mcore 00:05:36.046 ************************************ 00:05:36.046 00:52:48 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:36.046 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:36.046 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:36.046 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.046 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:36.046 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.046 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:36.046 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:36.046 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.046 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.046 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.046 00:52:48 accel.accel_decomp_full_mcore 
-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.046 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.046 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:36.046 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:36.046 [2024-05-15 00:52:48.230471] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:05:36.046 [2024-05-15 00:52:48.230528] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1139569 ] 00:05:36.046 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.046 [2024-05-15 00:52:48.302747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:36.046 [2024-05-15 00:52:48.428642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.046 [2024-05-15 00:52:48.428696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.046 [2024-05-15 00:52:48.428750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:36.046 [2024-05-15 00:52:48.428754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.304 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:36.304 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.304 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.304 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.304 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:36.304 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.304 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.304 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.304 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:36.304 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.304 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.304 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.304 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:36.304 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.304 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.304 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.304 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:36.305 00:52:48 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:36.305 00:52:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:37.679 00:05:37.679 real 0m1.517s 00:05:37.679 user 0m4.842s 00:05:37.679 sys 0m0.177s 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.679 00:52:49 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:37.679 ************************************ 00:05:37.679 END TEST accel_decomp_full_mcore 00:05:37.679 ************************************ 00:05:37.679 00:52:49 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:37.679 00:52:49 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:05:37.679 00:52:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.679 00:52:49 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.679 ************************************ 00:05:37.679 START TEST accel_decomp_mthread 00:05:37.679 ************************************ 00:05:37.680 00:52:49 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:37.680 00:52:49 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:37.680 00:52:49 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:37.680 00:52:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.680 00:52:49 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:37.680 00:52:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.680 00:52:49 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:37.680 00:52:49 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:37.680 00:52:49 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.680 00:52:49 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.680 00:52:49 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.680 00:52:49 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.680 00:52:49 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.680 00:52:49 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:37.680 00:52:49 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r 
. 00:05:37.680 [2024-05-15 00:52:49.804467] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:05:37.680 [2024-05-15 00:52:49.804534] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1139728 ] 00:05:37.680 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.680 [2024-05-15 00:52:49.880558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.680 [2024-05-15 00:52:50.002853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.680 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:37.680 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.680 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.680 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.680 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:37.680 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.680 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.680 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.680 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:37.680 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.680 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.680 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.680 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:37.680 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.680 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.680 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.680 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:37.680 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.680 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.680 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.680 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:37.680 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.680 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.680 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.680 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:37.680 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.680 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:37.680 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.680 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.937 00:52:50 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:37.937 00:52:50 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:05:38.899 00:52:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:38.899 00:52:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.899 00:52:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.899 00:52:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.899 00:52:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:38.899 00:52:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.899 00:52:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.899 00:52:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.899 00:52:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:38.899 00:52:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:38.899 00:52:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:38.899 00:52:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:38.899 00:52:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:39.158 00:52:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:39.158 00:52:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:39.158 00:52:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:39.158 00:52:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:39.158 00:52:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:39.158 00:52:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:39.158 00:52:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:39.158 00:52:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:39.158 00:52:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:39.158 00:52:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:39.158 00:52:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:39.158 00:52:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:39.158 00:52:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:39.158 00:52:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:39.158 00:52:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:39.158 00:52:51 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:39.158 00:52:51 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:39.158 00:52:51 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:39.158 00:05:39.158 real 0m1.507s 00:05:39.158 user 0m1.344s 00:05:39.158 sys 0m0.165s 00:05:39.158 00:52:51 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:39.158 00:52:51 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:39.158 ************************************ 00:05:39.158 END TEST accel_decomp_mthread 00:05:39.158 ************************************ 00:05:39.158 00:52:51 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:39.158 00:52:51 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:05:39.158 00:52:51 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.158 00:52:51 
accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.158 ************************************ 00:05:39.158 START TEST accel_decomp_full_mthread 00:05:39.158 ************************************ 00:05:39.158 00:52:51 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:39.158 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:39.158 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:39.158 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:39.158 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:39.158 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:39.158 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:39.158 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:39.158 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.158 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.158 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.158 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.158 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.158 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:39.158 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:39.158 [2024-05-15 00:52:51.364862] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
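The accel_decomp_full_mthread run starting here uses the same accel_perf invocation except that parallelism comes from -T 2 rather than a wide core mask (the EAL parameters below show -c 0x1, a single core). A one-line variant of the earlier sketch, under the same assumptions:

    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0 -T 2
    # -T 2: presumably two worker threads on the single core, which is what the
    #       "mthread" test name and the 0x1 core mask in this run suggest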
00:05:39.158 [2024-05-15 00:52:51.364938] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1140006 ] 00:05:39.158 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.158 [2024-05-15 00:52:51.437903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.416 [2024-05-15 00:52:51.561687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:39.416 00:52:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.787 00:52:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:40.787 00:52:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.787 00:52:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.787 00:52:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.787 00:52:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:40.787 00:52:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.787 00:52:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.787 00:52:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.787 00:52:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:40.787 00:52:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.787 00:52:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.787 00:52:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.788 00:52:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:40.788 00:52:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.788 00:52:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.788 00:52:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.788 00:52:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:40.788 00:52:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.788 00:52:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.788 00:52:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.788 00:52:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:40.788 00:52:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.788 00:52:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.788 00:52:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.788 00:52:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:40.788 00:52:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:40.788 00:52:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:40.788 00:52:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:40.788 00:52:52 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:40.788 00:52:52 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:40.788 00:52:52 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:40.788 00:05:40.788 real 0m1.532s 00:05:40.788 user 0m1.378s 00:05:40.788 sys 0m0.156s 00:05:40.788 00:52:52 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.788 00:52:52 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:40.788 ************************************ 00:05:40.788 END TEST accel_decomp_full_mthread 00:05:40.788 
************************************ 00:05:40.788 00:52:52 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:40.788 00:52:52 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:40.788 00:52:52 accel -- accel/accel.sh@137 -- # build_accel_config 00:05:40.788 00:52:52 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:40.788 00:52:52 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.788 00:52:52 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.788 00:52:52 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.788 00:52:52 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.788 00:52:52 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.788 00:52:52 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.788 00:52:52 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.788 00:52:52 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:40.788 00:52:52 accel -- accel/accel.sh@41 -- # jq -r . 00:05:40.788 ************************************ 00:05:40.788 START TEST accel_dif_functional_tests 00:05:40.788 ************************************ 00:05:40.788 00:52:52 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:40.788 [2024-05-15 00:52:52.965347] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:05:40.788 [2024-05-15 00:52:52.965406] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1140170 ] 00:05:40.788 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.788 [2024-05-15 00:52:53.037219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:40.788 [2024-05-15 00:52:53.162426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.788 [2024-05-15 00:52:53.162482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:40.788 [2024-05-15 00:52:53.162486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.045 00:05:41.045 00:05:41.045 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.045 http://cunit.sourceforge.net/ 00:05:41.045 00:05:41.045 00:05:41.045 Suite: accel_dif 00:05:41.045 Test: verify: DIF generated, GUARD check ...passed 00:05:41.045 Test: verify: DIF generated, APPTAG check ...passed 00:05:41.045 Test: verify: DIF generated, REFTAG check ...passed 00:05:41.045 Test: verify: DIF not generated, GUARD check ...[2024-05-15 00:52:53.261823] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:41.045 [2024-05-15 00:52:53.261888] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:41.045 passed 00:05:41.045 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 00:52:53.261944] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:41.045 [2024-05-15 00:52:53.261979] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:41.045 passed 00:05:41.045 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 00:52:53.262016] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:41.045 [2024-05-15 
00:52:53.262046] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:41.045 passed 00:05:41.045 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:41.045 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 00:52:53.262116] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:41.045 passed 00:05:41.045 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:41.045 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:41.045 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:41.045 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 00:52:53.262268] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:41.045 passed 00:05:41.045 Test: generate copy: DIF generated, GUARD check ...passed 00:05:41.045 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:41.045 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:41.045 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:41.045 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:41.045 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:41.045 Test: generate copy: iovecs-len validate ...[2024-05-15 00:52:53.262525] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:05:41.045 passed 00:05:41.045 Test: generate copy: buffer alignment validate ...passed 00:05:41.045 00:05:41.045 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.045 suites 1 1 n/a 0 0 00:05:41.045 tests 20 20 20 0 0 00:05:41.045 asserts 204 204 204 0 n/a 00:05:41.045 00:05:41.045 Elapsed time = 0.003 seconds 00:05:41.302 00:05:41.302 real 0m0.598s 00:05:41.302 user 0m0.888s 00:05:41.302 sys 0m0.190s 00:05:41.302 00:52:53 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:41.302 00:52:53 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:05:41.302 ************************************ 00:05:41.302 END TEST accel_dif_functional_tests 00:05:41.302 ************************************ 00:05:41.302 00:05:41.302 real 0m34.529s 00:05:41.302 user 0m37.841s 00:05:41.302 sys 0m4.900s 00:05:41.302 00:52:53 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:41.302 00:52:53 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.302 ************************************ 00:05:41.302 END TEST accel 00:05:41.302 ************************************ 00:05:41.302 00:52:53 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:41.302 00:52:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:41.302 00:52:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.302 00:52:53 -- common/autotest_common.sh@10 -- # set +x 00:05:41.302 ************************************ 00:05:41.302 START TEST accel_rpc 00:05:41.302 ************************************ 00:05:41.302 00:52:53 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:41.302 * Looking for test storage... 
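Several runs above print "EAL: No free 2048 kB hugepages reported on node 1". The tests still complete, so in this log the notice is informational, but it means EAL found no free 2 MB hugepages in node 1's pool. The per-node pools can be inspected directly through sysfs (standard kernel paths, nothing SPDK-specific):

    # Per-node 2 MB hugepage pools (the counters EAL is reporting on)
    grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
    grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages
    # System-wide summary for comparison
    grep -i huge /proc/meminfo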
00:05:41.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:41.302 00:52:53 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:41.302 00:52:53 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1140353 00:05:41.302 00:52:53 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:41.302 00:52:53 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1140353 00:05:41.302 00:52:53 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 1140353 ']' 00:05:41.302 00:52:53 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.302 00:52:53 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:41.302 00:52:53 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.302 00:52:53 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:41.302 00:52:53 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.559 [2024-05-15 00:52:53.704590] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:05:41.559 [2024-05-15 00:52:53.704674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1140353 ] 00:05:41.559 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.559 [2024-05-15 00:52:53.770773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.559 [2024-05-15 00:52:53.877474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.559 00:52:53 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:41.559 00:52:53 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:41.559 00:52:53 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:41.559 00:52:53 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:41.559 00:52:53 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:41.559 00:52:53 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:41.559 00:52:53 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:41.559 00:52:53 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:41.559 00:52:53 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.559 00:52:53 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.559 ************************************ 00:05:41.559 START TEST accel_assign_opcode 00:05:41.559 ************************************ 00:05:41.559 00:52:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:05:41.559 00:52:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:41.559 00:52:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.559 00:52:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:41.559 [2024-05-15 00:52:53.934063] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:41.559 00:52:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:05:41.559 00:52:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:41.559 00:52:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.559 00:52:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:41.559 [2024-05-15 00:52:53.942078] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:41.559 00:52:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.559 00:52:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:41.559 00:52:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.559 00:52:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:41.816 00:52:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:41.816 00:52:54 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:41.816 00:52:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:41.816 00:52:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:41.816 00:52:54 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:41.816 00:52:54 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:05:41.816 00:52:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.074 software 00:05:42.074 00:05:42.074 real 0m0.294s 00:05:42.074 user 0m0.041s 00:05:42.074 sys 0m0.007s 00:05:42.074 00:52:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:42.074 00:52:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:42.074 ************************************ 00:05:42.074 END TEST accel_assign_opcode 00:05:42.074 ************************************ 00:05:42.074 00:52:54 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1140353 00:05:42.074 00:52:54 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 1140353 ']' 00:05:42.074 00:52:54 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 1140353 00:05:42.074 00:52:54 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:05:42.074 00:52:54 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:42.074 00:52:54 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1140353 00:05:42.074 00:52:54 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:42.074 00:52:54 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:42.074 00:52:54 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1140353' 00:05:42.074 killing process with pid 1140353 00:05:42.074 00:52:54 accel_rpc -- common/autotest_common.sh@965 -- # kill 1140353 00:05:42.074 00:52:54 accel_rpc -- common/autotest_common.sh@970 -- # wait 1140353 00:05:42.643 00:05:42.643 real 0m1.144s 00:05:42.643 user 0m1.070s 00:05:42.643 sys 0m0.436s 00:05:42.643 00:52:54 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:42.643 00:52:54 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.643 ************************************ 00:05:42.643 END TEST accel_rpc 00:05:42.643 ************************************ 00:05:42.643 00:52:54 -- spdk/autotest.sh@181 -- # 
run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:42.643 00:52:54 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:42.643 00:52:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:42.643 00:52:54 -- common/autotest_common.sh@10 -- # set +x 00:05:42.643 ************************************ 00:05:42.643 START TEST app_cmdline 00:05:42.643 ************************************ 00:05:42.643 00:52:54 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:42.643 * Looking for test storage... 00:05:42.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:42.643 00:52:54 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:42.643 00:52:54 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1140562 00:05:42.643 00:52:54 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:42.643 00:52:54 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1140562 00:05:42.643 00:52:54 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 1140562 ']' 00:05:42.643 00:52:54 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.643 00:52:54 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:42.643 00:52:54 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.643 00:52:54 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:42.643 00:52:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:42.643 [2024-05-15 00:52:54.900274] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
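The accel_rpc sequence just above is a compact recipe for overriding an opcode-to-module assignment: start the target with --wait-for-rpc, assign the opcode while the framework is still waiting, initialize, then read the assignment back. A minimal sketch of the same flow using the rpc.py client from the log (timing is simplified; the real test uses waitforlisten rather than a sleep):

    #!/usr/bin/env bash
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"

    "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &
    tgt_pid=$!
    sleep 2    # crude stand-in for waitforlisten

    "$RPC" accel_assign_opc -o copy -m software    # must happen before init
    "$RPC" framework_start_init                    # framework comes up with the override
    "$RPC" accel_get_opc_assignments | jq -r .copy # prints: software

    kill "$tgt_pid"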
00:05:42.644 [2024-05-15 00:52:54.900353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1140562 ] 00:05:42.644 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.644 [2024-05-15 00:52:54.966533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.901 [2024-05-15 00:52:55.072425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.468 00:52:55 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:43.468 00:52:55 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:05:43.468 00:52:55 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:43.726 { 00:05:43.726 "version": "SPDK v24.05-pre git sha1 297733650", 00:05:43.726 "fields": { 00:05:43.726 "major": 24, 00:05:43.726 "minor": 5, 00:05:43.726 "patch": 0, 00:05:43.726 "suffix": "-pre", 00:05:43.726 "commit": "297733650" 00:05:43.726 } 00:05:43.726 } 00:05:43.726 00:52:56 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:43.726 00:52:56 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:43.726 00:52:56 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:43.726 00:52:56 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:43.726 00:52:56 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:43.726 00:52:56 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.726 00:52:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:43.726 00:52:56 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:43.726 00:52:56 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:43.726 00:52:56 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.984 00:52:56 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:43.984 00:52:56 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:43.984 00:52:56 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:43.984 00:52:56 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:05:43.984 00:52:56 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:43.984 00:52:56 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:43.984 00:52:56 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.984 00:52:56 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:43.984 00:52:56 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.984 00:52:56 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:43.984 00:52:56 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.984 00:52:56 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:43.984 00:52:56 
app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:43.984 00:52:56 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:44.244 request: 00:05:44.244 { 00:05:44.244 "method": "env_dpdk_get_mem_stats", 00:05:44.244 "req_id": 1 00:05:44.244 } 00:05:44.244 Got JSON-RPC error response 00:05:44.244 response: 00:05:44.244 { 00:05:44.244 "code": -32601, 00:05:44.244 "message": "Method not found" 00:05:44.244 } 00:05:44.244 00:52:56 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:05:44.244 00:52:56 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:44.244 00:52:56 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:44.244 00:52:56 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:44.244 00:52:56 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1140562 00:05:44.245 00:52:56 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 1140562 ']' 00:05:44.245 00:52:56 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 1140562 00:05:44.245 00:52:56 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:05:44.245 00:52:56 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:44.245 00:52:56 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1140562 00:05:44.245 00:52:56 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:44.245 00:52:56 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:44.245 00:52:56 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1140562' 00:05:44.245 killing process with pid 1140562 00:05:44.245 00:52:56 app_cmdline -- common/autotest_common.sh@965 -- # kill 1140562 00:05:44.245 00:52:56 app_cmdline -- common/autotest_common.sh@970 -- # wait 1140562 00:05:44.811 00:05:44.811 real 0m2.127s 00:05:44.811 user 0m2.671s 00:05:44.811 sys 0m0.520s 00:05:44.811 00:52:56 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:44.811 00:52:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:44.811 ************************************ 00:05:44.811 END TEST app_cmdline 00:05:44.811 ************************************ 00:05:44.811 00:52:56 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:44.811 00:52:56 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:44.811 00:52:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:44.811 00:52:56 -- common/autotest_common.sh@10 -- # set +x 00:05:44.811 ************************************ 00:05:44.811 START TEST version 00:05:44.811 ************************************ 00:05:44.811 00:52:56 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:44.811 * Looking for test storage... 
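Note on the app_cmdline run above: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two calls succeed while anything outside the allowlist (here env_dpdk_get_mem_stats) is rejected with JSON-RPC error -32601 "Method not found". A minimal sketch of the same check, assuming an spdk_tgt already running on the default /var/tmp/spdk.sock and the repository root as working directory:

    # Start the target with only two RPCs allowed (same flags as this run).
    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &

    # Allowed methods return normally.
    ./scripts/rpc.py spdk_get_version                  # {"version": "SPDK v24.05-pre ...", ...}
    ./scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort

    # Anything outside the allowlist fails with code -32601 (Method not found).
    ./scripts/rpc.py env_dpdk_get_mem_stats || echo "rejected as expected"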
00:05:44.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:44.811 00:52:57 version -- app/version.sh@17 -- # get_header_version major 00:05:44.811 00:52:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:44.811 00:52:57 version -- app/version.sh@14 -- # cut -f2 00:05:44.811 00:52:57 version -- app/version.sh@14 -- # tr -d '"' 00:05:44.811 00:52:57 version -- app/version.sh@17 -- # major=24 00:05:44.811 00:52:57 version -- app/version.sh@18 -- # get_header_version minor 00:05:44.811 00:52:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:44.811 00:52:57 version -- app/version.sh@14 -- # cut -f2 00:05:44.811 00:52:57 version -- app/version.sh@14 -- # tr -d '"' 00:05:44.811 00:52:57 version -- app/version.sh@18 -- # minor=5 00:05:44.811 00:52:57 version -- app/version.sh@19 -- # get_header_version patch 00:05:44.811 00:52:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:44.811 00:52:57 version -- app/version.sh@14 -- # cut -f2 00:05:44.811 00:52:57 version -- app/version.sh@14 -- # tr -d '"' 00:05:44.811 00:52:57 version -- app/version.sh@19 -- # patch=0 00:05:44.811 00:52:57 version -- app/version.sh@20 -- # get_header_version suffix 00:05:44.811 00:52:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:44.811 00:52:57 version -- app/version.sh@14 -- # cut -f2 00:05:44.811 00:52:57 version -- app/version.sh@14 -- # tr -d '"' 00:05:44.811 00:52:57 version -- app/version.sh@20 -- # suffix=-pre 00:05:44.811 00:52:57 version -- app/version.sh@22 -- # version=24.5 00:05:44.811 00:52:57 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:44.811 00:52:57 version -- app/version.sh@28 -- # version=24.5rc0 00:05:44.811 00:52:57 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:44.811 00:52:57 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:44.811 00:52:57 version -- app/version.sh@30 -- # py_version=24.5rc0 00:05:44.811 00:52:57 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:05:44.811 00:05:44.811 real 0m0.105s 00:05:44.811 user 0m0.058s 00:05:44.811 sys 0m0.069s 00:05:44.811 00:52:57 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:44.811 00:52:57 version -- common/autotest_common.sh@10 -- # set +x 00:05:44.811 ************************************ 00:05:44.811 END TEST version 00:05:44.811 ************************************ 00:05:44.811 00:52:57 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:05:44.811 00:52:57 -- spdk/autotest.sh@194 -- # uname -s 00:05:44.811 00:52:57 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:44.811 00:52:57 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:44.811 00:52:57 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:44.811 00:52:57 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 
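The version test above derives the expected version string straight from include/spdk/version.h with grep/cut/tr and compares it against python3 -c 'import spdk; print(spdk.__version__)'. A condensed sketch of that extraction (run from the SPDK repository root; the rc0 mapping for a -pre suffix follows what the log shows, not the full script):

    # Pull one field out of version.h, e.g. '#define SPDK_VERSION_MAJOR 24'
    get_header_version() {
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h | cut -f2 | tr -d '"'
    }

    major=$(get_header_version MAJOR)    # 24
    minor=$(get_header_version MINOR)    # 5
    patch=$(get_header_version PATCH)    # 0
    suffix=$(get_header_version SUFFIX)  # -pre
    version="${major}.${minor}"
    (( patch != 0 )) && version="${version}.${patch}"
    [[ -n ${suffix} ]] && version="${version}rc0"    # pre-release suffix reported as rc0
    echo "${version}"                                # 24.5rc0 for this build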
00:05:44.811 00:52:57 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:44.811 00:52:57 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:44.811 00:52:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:44.811 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:05:44.811 00:52:57 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:44.811 00:52:57 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:05:44.811 00:52:57 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']' 00:05:44.811 00:52:57 -- spdk/autotest.sh@276 -- # export NET_TYPE 00:05:44.811 00:52:57 -- spdk/autotest.sh@279 -- # '[' tcp = rdma ']' 00:05:44.811 00:52:57 -- spdk/autotest.sh@282 -- # '[' tcp = tcp ']' 00:05:44.811 00:52:57 -- spdk/autotest.sh@283 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:44.811 00:52:57 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:05:44.811 00:52:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:44.811 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:05:44.811 ************************************ 00:05:44.811 START TEST nvmf_tcp 00:05:44.811 ************************************ 00:05:44.811 00:52:57 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:45.070 * Looking for test storage... 00:05:45.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:45.070 00:52:57 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:45.070 00:52:57 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:45.070 00:52:57 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:45.070 00:52:57 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:05:45.070 00:52:57 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:45.070 00:52:57 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:45.070 00:52:57 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:45.070 00:52:57 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:45.070 00:52:57 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:45.070 00:52:57 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:45.070 00:52:57 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:45.070 00:52:57 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:45.070 00:52:57 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:45.070 00:52:57 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:45.070 00:52:57 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:45.070 00:52:57 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:45.070 00:52:57 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:45.070 00:52:57 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:45.070 00:52:57 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:45.070 00:52:57 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:45.070 00:52:57 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:45.070 00:52:57 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:45.070 00:52:57 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:45.070 00:52:57 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:45.070 00:52:57 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.070 00:52:57 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.070 00:52:57 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.070 00:52:57 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:05:45.070 00:52:57 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.070 00:52:57 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:05:45.070 00:52:57 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:45.070 00:52:57 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:45.070 00:52:57 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:45.070 00:52:57 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:45.071 00:52:57 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:45.071 00:52:57 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:45.071 00:52:57 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:45.071 00:52:57 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:45.071 00:52:57 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:45.071 00:52:57 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:05:45.071 00:52:57 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:05:45.071 00:52:57 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:45.071 00:52:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:45.071 00:52:57 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:05:45.071 00:52:57 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:45.071 00:52:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:05:45.071 00:52:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.071 
00:52:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:45.071 ************************************ 00:05:45.071 START TEST nvmf_example 00:05:45.071 ************************************ 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:45.071 * Looking for test storage... 00:05:45.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:05:45.071 00:52:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:47.602 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:47.602 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:47.602 Found net devices under 
0000:0a:00.0: cvl_0_0 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:47.602 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:47.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:47.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:05:47.602 00:05:47.602 --- 10.0.0.2 ping statistics --- 00:05:47.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:47.602 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:47.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:47.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:05:47.602 00:05:47.602 --- 10.0.0.1 ping statistics --- 00:05:47.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:47.602 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1142890 00:05:47.602 00:52:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:05:47.603 00:52:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:05:47.603 00:52:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1142890 00:05:47.603 00:52:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 1142890 ']' 00:05:47.603 00:52:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.603 00:52:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:47.603 00:52:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
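Because this is a phy run on the two e810 ports (cvl_0_0 and cvl_0_1, discovered under /sys/bus/pci/devices/<bdf>/net/), nvmftestinit builds a point-to-point TCP topology by moving the target port into its own network namespace. A rough sketch of the steps logged above, using the interface names from this machine:

    # Target port lives in its own namespace; initiator port stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port and verify reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1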
00:05:47.603 00:52:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:47.603 00:52:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:47.861 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.794 00:53:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:48.794 00:53:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:05:48.794 00:53:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:05:48.795 00:53:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:48.795 00:53:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:48.795 00:53:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:48.795 00:53:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.795 00:53:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:48.795 00:53:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.795 00:53:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:05:48.795 00:53:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.795 00:53:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:48.795 00:53:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.795 00:53:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:05:48.795 00:53:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:48.795 00:53:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.795 00:53:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:48.795 00:53:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.795 00:53:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:05:48.795 00:53:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:05:48.795 00:53:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.795 00:53:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:48.795 00:53:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.795 00:53:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:48.795 00:53:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:48.795 00:53:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:48.795 00:53:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:48.795 00:53:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:05:48.795 00:53:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:05:48.795 EAL: No free 2048 kB hugepages reported on node 1 
00:06:01.003 Initializing NVMe Controllers 00:06:01.003 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:01.003 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:01.003 Initialization complete. Launching workers. 00:06:01.003 ======================================================== 00:06:01.003 Latency(us) 00:06:01.003 Device Information : IOPS MiB/s Average min max 00:06:01.003 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14653.36 57.24 4368.12 880.32 51907.07 00:06:01.003 ======================================================== 00:06:01.003 Total : 14653.36 57.24 4368.12 880.32 51907.07 00:06:01.003 00:06:01.003 00:53:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:01.003 00:53:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:01.003 00:53:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:01.003 00:53:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:01.003 00:53:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:01.003 00:53:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:01.003 00:53:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:01.003 00:53:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:01.003 rmmod nvme_tcp 00:06:01.003 rmmod nvme_fabrics 00:06:01.003 rmmod nvme_keyring 00:06:01.003 00:53:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:01.003 00:53:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:01.003 00:53:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:01.003 00:53:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1142890 ']' 00:06:01.003 00:53:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1142890 00:06:01.003 00:53:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 1142890 ']' 00:06:01.003 00:53:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 1142890 00:06:01.003 00:53:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:06:01.003 00:53:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:01.003 00:53:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1142890 00:06:01.003 00:53:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:06:01.003 00:53:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:06:01.003 00:53:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1142890' 00:06:01.003 killing process with pid 1142890 00:06:01.003 00:53:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 1142890 00:06:01.003 00:53:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 1142890 00:06:01.003 nvmf threads initialize successfully 00:06:01.003 bdev subsystem init successfully 00:06:01.003 created a nvmf target service 00:06:01.003 create targets's poll groups done 00:06:01.003 all subsystems of target started 00:06:01.003 nvmf target is running 00:06:01.003 all subsystems of target stopped 00:06:01.003 destroy targets's poll groups done 00:06:01.003 destroyed the nvmf target service 00:06:01.003 bdev subsystem finish successfully 00:06:01.003 nvmf threads destroy successfully 00:06:01.003 00:53:11 
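Condensing the nvmf_example flow above: the example nvmf target is launched inside the target namespace, configured with a TCP transport, one 64 MiB malloc bdev, a subsystem, a namespace, and a 10.0.0.2:4420 listener, then driven from the initiator side with spdk_nvme_perf. A minimal sketch using the same RPCs and parameters that appear in the log (issued here through rpc.py rather than the test's rpc_cmd wrapper):

    ip netns exec cvl_0_0_ns_spdk ./build/examples/nvmf -i 0 -g 10000 -m 0xF &

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512                     # returns "Malloc0"
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # 10-second 70/30 random read/write run, qd=64, 4 KiB I/O, over the listener above.
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'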
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:01.003 00:53:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:01.003 00:53:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:01.003 00:53:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:01.003 00:53:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:01.003 00:53:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:01.003 00:53:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:01.003 00:53:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:01.582 00:53:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:01.582 00:53:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:01.582 00:53:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:01.582 00:53:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:01.582 00:06:01.582 real 0m16.716s 00:06:01.582 user 0m46.609s 00:06:01.582 sys 0m3.619s 00:06:01.582 00:53:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:01.582 00:53:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:01.582 ************************************ 00:06:01.582 END TEST nvmf_example 00:06:01.582 ************************************ 00:06:01.842 00:53:13 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:01.842 00:53:13 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:01.842 00:53:13 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:01.842 00:53:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:01.842 ************************************ 00:06:01.842 START TEST nvmf_filesystem 00:06:01.842 ************************************ 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:01.842 * Looking for test storage... 
00:06:01.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:01.842 00:53:14 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:01.842 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:01.843 00:53:14 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:01.843 
00:53:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:01.843 #define SPDK_CONFIG_H 00:06:01.843 #define SPDK_CONFIG_APPS 1 00:06:01.843 #define SPDK_CONFIG_ARCH native 00:06:01.843 #undef SPDK_CONFIG_ASAN 00:06:01.843 #undef SPDK_CONFIG_AVAHI 00:06:01.843 #undef SPDK_CONFIG_CET 00:06:01.843 #define SPDK_CONFIG_COVERAGE 1 00:06:01.843 #define SPDK_CONFIG_CROSS_PREFIX 00:06:01.843 #undef SPDK_CONFIG_CRYPTO 00:06:01.843 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:01.843 #undef SPDK_CONFIG_CUSTOMOCF 00:06:01.843 #undef SPDK_CONFIG_DAOS 00:06:01.843 #define SPDK_CONFIG_DAOS_DIR 00:06:01.843 #define SPDK_CONFIG_DEBUG 1 00:06:01.843 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:01.843 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:01.843 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:01.843 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:01.843 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:01.843 #undef SPDK_CONFIG_DPDK_UADK 00:06:01.843 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:01.843 #define SPDK_CONFIG_EXAMPLES 1 00:06:01.843 #undef SPDK_CONFIG_FC 00:06:01.843 #define SPDK_CONFIG_FC_PATH 00:06:01.843 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:01.843 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:01.843 #undef SPDK_CONFIG_FUSE 00:06:01.843 #undef SPDK_CONFIG_FUZZER 00:06:01.843 #define SPDK_CONFIG_FUZZER_LIB 00:06:01.843 #undef SPDK_CONFIG_GOLANG 00:06:01.843 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:01.843 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:01.843 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:01.843 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:06:01.843 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:01.843 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:01.843 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:01.843 #define SPDK_CONFIG_IDXD 1 00:06:01.843 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:01.843 #undef SPDK_CONFIG_IPSEC_MB 00:06:01.843 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:01.843 #define SPDK_CONFIG_ISAL 1 00:06:01.843 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:01.843 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:01.843 #define SPDK_CONFIG_LIBDIR 00:06:01.843 #undef SPDK_CONFIG_LTO 00:06:01.843 #define SPDK_CONFIG_MAX_LCORES 00:06:01.843 #define SPDK_CONFIG_NVME_CUSE 1 00:06:01.843 #undef SPDK_CONFIG_OCF 00:06:01.843 #define SPDK_CONFIG_OCF_PATH 00:06:01.843 #define SPDK_CONFIG_OPENSSL_PATH 00:06:01.843 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:01.843 #define SPDK_CONFIG_PGO_DIR 00:06:01.843 #undef 
SPDK_CONFIG_PGO_USE 00:06:01.843 #define SPDK_CONFIG_PREFIX /usr/local 00:06:01.843 #undef SPDK_CONFIG_RAID5F 00:06:01.843 #undef SPDK_CONFIG_RBD 00:06:01.843 #define SPDK_CONFIG_RDMA 1 00:06:01.843 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:01.843 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:01.843 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:01.843 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:01.843 #define SPDK_CONFIG_SHARED 1 00:06:01.843 #undef SPDK_CONFIG_SMA 00:06:01.843 #define SPDK_CONFIG_TESTS 1 00:06:01.843 #undef SPDK_CONFIG_TSAN 00:06:01.843 #define SPDK_CONFIG_UBLK 1 00:06:01.843 #define SPDK_CONFIG_UBSAN 1 00:06:01.843 #undef SPDK_CONFIG_UNIT_TESTS 00:06:01.843 #undef SPDK_CONFIG_URING 00:06:01.843 #define SPDK_CONFIG_URING_PATH 00:06:01.843 #undef SPDK_CONFIG_URING_ZNS 00:06:01.843 #undef SPDK_CONFIG_USDT 00:06:01.843 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:01.843 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:01.843 #define SPDK_CONFIG_VFIO_USER 1 00:06:01.843 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:01.843 #define SPDK_CONFIG_VHOST 1 00:06:01.843 #define SPDK_CONFIG_VIRTIO 1 00:06:01.843 #undef SPDK_CONFIG_VTUNE 00:06:01.843 #define SPDK_CONFIG_VTUNE_DIR 00:06:01.843 #define SPDK_CONFIG_WERROR 1 00:06:01.843 #define SPDK_CONFIG_WPDK_DIR 00:06:01.843 #undef SPDK_CONFIG_XNVME 00:06:01.843 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:01.843 00:53:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 0 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:06:01.844 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # : 0 00:06:01.845 00:53:14 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo 
leak:libfuse3.so 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 
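The long run of paired "# : 0" / "# export SPDK_TEST_*" records above is autotest_common.sh giving every test flag a default when autorun-spdk.conf left it unset and then exporting it, so the values chosen in the Prologue (SPDK_TEST_NVMF=1, SPDK_TEST_NVMF_TRANSPORT=tcp, SPDK_RUN_UBSAN=1, ...) survive into every child script while the remaining flags fall back to 0. A minimal sketch of that shell idiom, reconstructed from the trace rather than quoted from the script (the exact expansion syntax in autotest_common.sh may differ), looks like this:

    # ':' is a no-op command, so this assigns the default only when the
    # variable is unset; under xtrace it prints as "# : 0" or "# : 1",
    # exactly the pairs visible in the log above.
    : "${SPDK_TEST_NVMF:=0}"
    export SPDK_TEST_NVMF
    : "${SPDK_RUN_UBSAN:=0}"
    export SPDK_RUN_UBSAN

The same stretch of the script also exports SPDK_LIB_DIR, LD_LIBRARY_PATH, PYTHONPATH and the ASAN/UBSAN/LSAN option strings; the repeated directory segments in PATH and LD_LIBRARY_PATH appear to come from the common scripts being sourced more than once, each pass prepending the same entries again.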
00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j48 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 1144714 ]] 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 1144714 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:06:01.845 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.Tb5Yk8 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.Tb5Yk8/tests/target /tmp/spdk.Tb5Yk8 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # 
avails["$mount"]=67108864 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=968667136 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4315762688 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=48361582592 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=61994729472 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=13633146880 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30941728768 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997364736 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=55635968 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12389986304 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12398948352 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8962048 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30995816448 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997364736 00:06:01.846 00:53:14 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=1548288 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6199468032 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6199472128 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:06:01.846 * Looking for test storage... 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=48361582592 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=15847739392 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:01.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:06:01.846 00:53:14 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:01.846 00:53:14 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:01.846 
00:53:14 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:01.847 00:53:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.847 00:53:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.847 00:53:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.847 00:53:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:01.847 00:53:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.847 00:53:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:01.847 00:53:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:01.847 00:53:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:01.847 00:53:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:01.847 00:53:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:01.847 00:53:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:01.847 00:53:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:01.847 00:53:14 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:01.847 00:53:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:01.847 00:53:14 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:01.847 00:53:14 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:01.847 00:53:14 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:01.847 00:53:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:01.847 00:53:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:01.847 00:53:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:01.847 00:53:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:01.847 00:53:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:01.847 00:53:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:01.847 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:01.847 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:01.847 00:53:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:01.847 00:53:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:01.847 00:53:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:01.847 00:53:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:04.373 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:04.373 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:04.373 00:53:16 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:04.373 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:04.373 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:04.373 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:04.374 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:04.374 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:04.374 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:04.374 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:04.374 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:04.374 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:04.374 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:04.374 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:04.374 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:04.374 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:04.374 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:04.374 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:04.374 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:04.374 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:04.374 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:04.374 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:04.374 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:04.374 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:04.374 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:04.374 00:53:16 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:04.374 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:04.374 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:04.632 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:04.632 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:04.632 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:04.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:04.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:06:04.632 00:06:04.632 --- 10.0.0.2 ping statistics --- 00:06:04.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:04.632 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:06:04.632 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:04.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:04.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:06:04.632 00:06:04.632 --- 10.0.0.1 ping statistics --- 00:06:04.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:04.632 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:06:04.632 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:04.632 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:06:04.632 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:04.632 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:04.632 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:04.632 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:04.632 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:04.632 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:04.632 00:53:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:04.632 00:53:16 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:04.632 00:53:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:04.632 00:53:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.632 00:53:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:04.632 ************************************ 00:06:04.632 START TEST nvmf_filesystem_no_in_capsule 00:06:04.632 ************************************ 00:06:04.632 00:53:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:06:04.632 00:53:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:06:04.632 00:53:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:04.632 00:53:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:04.632 00:53:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # 
xtrace_disable 00:06:04.632 00:53:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:04.632 00:53:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1146640 00:06:04.632 00:53:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:04.632 00:53:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1146640 00:06:04.632 00:53:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 1146640 ']' 00:06:04.632 00:53:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.632 00:53:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:04.632 00:53:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.632 00:53:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:04.632 00:53:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:04.632 [2024-05-15 00:53:16.894836] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:06:04.632 [2024-05-15 00:53:16.894937] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:04.632 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.632 [2024-05-15 00:53:16.971546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:04.890 [2024-05-15 00:53:17.085770] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:04.890 [2024-05-15 00:53:17.085828] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:04.890 [2024-05-15 00:53:17.085856] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:04.890 [2024-05-15 00:53:17.085867] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:04.890 [2024-05-15 00:53:17.085876] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
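Note on the nvmf_tcp_init sequence traced just above: it turns the two E810 port functions into a self-contained NVMe/TCP test topology. cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), TCP port 4420 is opened, reachability is checked both ways, and nvmf_tgt is then launched inside the namespace. A minimal stand-alone sketch of that plumbing follows; device names, addresses and the target binary path are simply what this node happened to use, and the real logic lives in nvmf/common.sh.
# Sketch of the namespace plumbing from nvmf/common.sh (nvmf_tcp_init); illustrative only.
TARGET_IF=cvl_0_0        # port handed to the SPDK target, moved into a netns
INITIATOR_IF=cvl_0_1     # port left in the root namespace for the kernel initiator
NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                               # root ns reaches the target address
ip netns exec "$NS" ping -c 1 10.0.0.1           # namespace reaches the initiator address
modprobe nvme-tcp
# Target started inside the namespace, as in the waitforlisten step above
# (the trace uses the absolute workspace path to build/bin/nvmf_tgt):
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &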
00:06:04.890 [2024-05-15 00:53:17.085962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.890 [2024-05-15 00:53:17.086027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.890 [2024-05-15 00:53:17.086093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.890 [2024-05-15 00:53:17.086096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.824 00:53:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:05.824 00:53:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:06:05.824 00:53:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:05.824 00:53:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:05.824 00:53:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:05.824 00:53:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:05.824 00:53:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:05.824 00:53:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:05.824 00:53:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.824 00:53:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:05.824 [2024-05-15 00:53:17.878798] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:05.824 00:53:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.824 00:53:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:05.824 00:53:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.824 00:53:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:05.824 Malloc1 00:06:05.824 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.824 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:05.824 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.824 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:05.824 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.824 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:05.824 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.824 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:06:05.824 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.824 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:05.824 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.824 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:05.824 [2024-05-15 00:53:18.063305] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:05.824 [2024-05-15 00:53:18.063601] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:05.824 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.824 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:05.824 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:06:05.824 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:06:05.824 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:06:05.824 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:06:05.824 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:05.824 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.824 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:05.824 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.824 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:06:05.824 { 00:06:05.824 "name": "Malloc1", 00:06:05.824 "aliases": [ 00:06:05.824 "7527f8fb-9dd3-4f27-bc45-4ca0f2f5d057" 00:06:05.824 ], 00:06:05.824 "product_name": "Malloc disk", 00:06:05.824 "block_size": 512, 00:06:05.824 "num_blocks": 1048576, 00:06:05.824 "uuid": "7527f8fb-9dd3-4f27-bc45-4ca0f2f5d057", 00:06:05.825 "assigned_rate_limits": { 00:06:05.825 "rw_ios_per_sec": 0, 00:06:05.825 "rw_mbytes_per_sec": 0, 00:06:05.825 "r_mbytes_per_sec": 0, 00:06:05.825 "w_mbytes_per_sec": 0 00:06:05.825 }, 00:06:05.825 "claimed": true, 00:06:05.825 "claim_type": "exclusive_write", 00:06:05.825 "zoned": false, 00:06:05.825 "supported_io_types": { 00:06:05.825 "read": true, 00:06:05.825 "write": true, 00:06:05.825 "unmap": true, 00:06:05.825 "write_zeroes": true, 00:06:05.825 "flush": true, 00:06:05.825 "reset": true, 00:06:05.825 "compare": false, 00:06:05.825 "compare_and_write": false, 00:06:05.825 "abort": true, 00:06:05.825 "nvme_admin": false, 00:06:05.825 "nvme_io": false 00:06:05.825 }, 00:06:05.825 "memory_domains": [ 00:06:05.825 { 00:06:05.825 "dma_device_id": "system", 00:06:05.825 "dma_device_type": 1 
00:06:05.825 }, 00:06:05.825 { 00:06:05.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.825 "dma_device_type": 2 00:06:05.825 } 00:06:05.825 ], 00:06:05.825 "driver_specific": {} 00:06:05.825 } 00:06:05.825 ]' 00:06:05.825 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:06:05.825 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:06:05.825 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:06:05.825 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:06:05.825 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:06:05.825 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:06:05.825 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:05.825 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:06.758 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:06.758 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:06:06.758 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:06:06.758 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:06:06.758 00:53:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:06:08.657 00:53:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:06:08.657 00:53:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:06:08.657 00:53:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:06:08.657 00:53:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:06:08.657 00:53:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:06:08.657 00:53:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:06:08.657 00:53:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:08.657 00:53:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:08.657 00:53:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:08.657 00:53:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:08.657 00:53:20 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:08.657 00:53:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:08.657 00:53:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:08.657 00:53:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:08.657 00:53:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:08.657 00:53:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:08.657 00:53:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:08.915 00:53:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:09.480 00:53:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:10.853 00:53:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:10.853 00:53:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:10.853 00:53:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:10.853 00:53:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.853 00:53:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:10.853 ************************************ 00:06:10.853 START TEST filesystem_ext4 00:06:10.853 ************************************ 00:06:10.853 00:53:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:10.853 00:53:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:10.853 00:53:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:10.853 00:53:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:10.853 00:53:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:06:10.853 00:53:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:10.853 00:53:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:06:10.853 00:53:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:06:10.853 00:53:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:06:10.853 00:53:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:06:10.853 00:53:22 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:10.853 mke2fs 1.46.5 (30-Dec-2021) 00:06:10.853 Discarding device blocks: 0/522240 done 00:06:10.853 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:10.853 Filesystem UUID: 18d1506a-b7ad-4729-8f0a-f574499b5b46 00:06:10.853 Superblock backups stored on blocks: 00:06:10.853 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:10.853 00:06:10.853 Allocating group tables: 0/64 done 00:06:10.853 Writing inode tables: 0/64 done 00:06:10.853 Creating journal (8192 blocks): done 00:06:10.853 Writing superblocks and filesystem accounting information: 0/64 done 00:06:10.853 00:06:10.853 00:53:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:06:10.853 00:53:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:11.787 00:53:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:11.787 00:53:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:11.787 00:53:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:11.787 00:53:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:11.787 00:53:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:11.787 00:53:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:11.787 00:53:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1146640 00:06:11.787 00:53:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:11.787 00:53:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:11.787 00:53:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:11.787 00:53:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:11.787 00:06:11.787 real 0m1.042s 00:06:11.787 user 0m0.022s 00:06:11.787 sys 0m0.034s 00:06:11.787 00:53:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:11.787 00:53:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:11.787 ************************************ 00:06:11.787 END TEST filesystem_ext4 00:06:11.787 ************************************ 00:06:11.787 00:53:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:11.787 00:53:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:11.787 00:53:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.787 00:53:23 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:11.787 ************************************ 00:06:11.787 START TEST filesystem_btrfs 00:06:11.787 ************************************ 00:06:11.787 00:53:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:11.787 00:53:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:11.787 00:53:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:11.787 00:53:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:11.787 00:53:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:06:11.787 00:53:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:11.787 00:53:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:06:11.787 00:53:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:06:11.787 00:53:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:06:11.787 00:53:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:06:11.787 00:53:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:12.046 btrfs-progs v6.6.2 00:06:12.046 See https://btrfs.readthedocs.io for more information. 00:06:12.046 00:06:12.046 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:12.046 NOTE: several default settings have changed in version 5.15, please make sure 00:06:12.046 this does not affect your deployments: 00:06:12.046 - DUP for metadata (-m dup) 00:06:12.046 - enabled no-holes (-O no-holes) 00:06:12.046 - enabled free-space-tree (-R free-space-tree) 00:06:12.046 00:06:12.046 Label: (null) 00:06:12.046 UUID: e8f9ee21-d475-4189-a83b-e980c57bf34c 00:06:12.046 Node size: 16384 00:06:12.046 Sector size: 4096 00:06:12.046 Filesystem size: 510.00MiB 00:06:12.046 Block group profiles: 00:06:12.046 Data: single 8.00MiB 00:06:12.046 Metadata: DUP 32.00MiB 00:06:12.046 System: DUP 8.00MiB 00:06:12.046 SSD detected: yes 00:06:12.046 Zoned device: no 00:06:12.046 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:12.046 Runtime features: free-space-tree 00:06:12.046 Checksum: crc32c 00:06:12.046 Number of devices: 1 00:06:12.046 Devices: 00:06:12.046 ID SIZE PATH 00:06:12.046 1 510.00MiB /dev/nvme0n1p1 00:06:12.046 00:06:12.046 00:53:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:06:12.046 00:53:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:12.612 00:53:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:12.612 00:53:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:12.612 00:53:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:12.612 00:53:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:12.612 00:53:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:12.612 00:53:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:12.612 00:53:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1146640 00:06:12.612 00:53:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:12.612 00:53:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:12.612 00:53:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:12.612 00:53:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:12.612 00:06:12.612 real 0m1.038s 00:06:12.612 user 0m0.015s 00:06:12.612 sys 0m0.048s 00:06:12.612 00:53:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:12.612 00:53:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:12.612 ************************************ 00:06:12.612 END TEST filesystem_btrfs 00:06:12.612 ************************************ 00:06:12.612 00:53:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:12.612 00:53:24 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:12.612 00:53:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.612 00:53:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:12.870 ************************************ 00:06:12.870 START TEST filesystem_xfs 00:06:12.870 ************************************ 00:06:12.870 00:53:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:06:12.870 00:53:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:12.870 00:53:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:12.870 00:53:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:12.870 00:53:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:06:12.870 00:53:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:12.870 00:53:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:06:12.870 00:53:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:06:12.870 00:53:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:06:12.870 00:53:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:06:12.870 00:53:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:12.870 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:12.870 = sectsz=512 attr=2, projid32bit=1 00:06:12.870 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:12.870 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:12.870 data = bsize=4096 blocks=130560, imaxpct=25 00:06:12.870 = sunit=0 swidth=0 blks 00:06:12.870 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:12.870 log =internal log bsize=4096 blocks=16384, version=2 00:06:12.870 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:12.870 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:13.804 Discarding blocks...Done. 
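Note on the filesystem_* subtests: each one runs the same smoke test from target/filesystem.sh. A single GPT partition is created on the 512 MiB namespace exported by the target, the filesystem under test is built on it, and a short mount/write/remove/unmount cycle confirms I/O works and the target process (pid 1146640 in this run) is still alive afterwards. The xfs pass that follows repeats the cycle already seen for ext4 and btrfs; condensed into one loop it is roughly the sketch below, which is illustrative rather than the literal script (the real runs are driven one filesystem at a time through run_test).
# Hypothetical condensation of the per-filesystem check traced in this log.
dev=/dev/nvme0n1          # namespace exported by the SPDK target
part=${dev}p1
nvmfpid=1146640           # nvmf_tgt PID recorded earlier in the log
parted -s "$dev" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe
for fstype in ext4 btrfs xfs; do
    case "$fstype" in
        ext4) mkfs.ext4 -F "$part" ;;         # ext4 uses -F to force
        *)    mkfs."$fstype" -f "$part" ;;    # btrfs/xfs use -f
    esac
    mount "$part" /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"    # target must still be running after the I/O
done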
00:06:13.804 00:53:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:06:13.804 00:53:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:16.332 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:16.332 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:16.332 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:16.332 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:16.332 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:16.332 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:16.332 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1146640 00:06:16.332 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:16.332 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:16.332 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:16.332 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:16.332 00:06:16.332 real 0m3.458s 00:06:16.332 user 0m0.019s 00:06:16.332 sys 0m0.041s 00:06:16.332 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:16.332 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:16.332 ************************************ 00:06:16.332 END TEST filesystem_xfs 00:06:16.332 ************************************ 00:06:16.332 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:16.591 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:16.591 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:16.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:16.591 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:16.591 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:06:16.591 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:06:16.591 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:16.591 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:06:16.591 
00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:16.591 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:06:16.591 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:16.591 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.591 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:16.591 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.591 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:16.591 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1146640 00:06:16.591 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 1146640 ']' 00:06:16.591 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 1146640 00:06:16.591 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:06:16.591 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:16.591 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1146640 00:06:16.591 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:16.591 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:16.591 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1146640' 00:06:16.591 killing process with pid 1146640 00:06:16.591 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 1146640 00:06:16.591 [2024-05-15 00:53:28.879349] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:16.591 00:53:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 1146640 00:06:17.185 00:53:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:17.185 00:06:17.185 real 0m12.528s 00:06:17.185 user 0m48.113s 00:06:17.185 sys 0m1.732s 00:06:17.185 00:53:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:17.185 00:53:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:17.185 ************************************ 00:06:17.185 END TEST nvmf_filesystem_no_in_capsule 00:06:17.185 ************************************ 00:06:17.185 00:53:29 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:17.185 00:53:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # 
'[' 3 -le 1 ']' 00:06:17.185 00:53:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:17.185 00:53:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.185 ************************************ 00:06:17.185 START TEST nvmf_filesystem_in_capsule 00:06:17.185 ************************************ 00:06:17.185 00:53:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:06:17.185 00:53:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:17.185 00:53:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:17.185 00:53:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:17.186 00:53:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:17.186 00:53:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:17.186 00:53:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1148334 00:06:17.186 00:53:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1148334 00:06:17.186 00:53:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:17.186 00:53:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 1148334 ']' 00:06:17.186 00:53:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.186 00:53:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:17.186 00:53:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.186 00:53:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:17.186 00:53:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:17.186 [2024-05-15 00:53:29.478804] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:06:17.186 [2024-05-15 00:53:29.478880] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:17.186 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.454 [2024-05-15 00:53:29.557474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:17.454 [2024-05-15 00:53:29.679378] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:17.454 [2024-05-15 00:53:29.679450] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
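Note on the second pass: the no-in-capsule run has just been torn down (nvme disconnect, nvmf_delete_subsystem, killprocess) and nvmf_filesystem_part is restarted with in_capsule=4096. The only functional difference between the two TEST blocks is the -c value handed to nvmf_create_transport at filesystem.sh@52; with 4096, write payloads of up to 4 KiB can ride inside the command capsule rather than being solicited by the target in a separate data transfer. rpc_cmd here is the autotest wrapper around scripts/rpc.py.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0      # first pass: no in-capsule data
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096   # this pass: 4 KiB in-capsule data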
00:06:17.454 [2024-05-15 00:53:29.679466] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:17.454 [2024-05-15 00:53:29.679488] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:17.454 [2024-05-15 00:53:29.679501] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:17.454 [2024-05-15 00:53:29.679571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.454 [2024-05-15 00:53:29.679627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.454 [2024-05-15 00:53:29.679678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.454 [2024-05-15 00:53:29.679681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:18.389 [2024-05-15 00:53:30.475062] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:18.389 Malloc1 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.389 00:53:30 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:18.389 [2024-05-15 00:53:30.659448] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:18.389 [2024-05-15 00:53:30.659744] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:06:18.389 { 00:06:18.389 "name": "Malloc1", 00:06:18.389 "aliases": [ 00:06:18.389 "66b48b7c-ee74-40de-b008-8d3973e6cbbc" 00:06:18.389 ], 00:06:18.389 "product_name": "Malloc disk", 00:06:18.389 "block_size": 512, 00:06:18.389 "num_blocks": 1048576, 00:06:18.389 "uuid": "66b48b7c-ee74-40de-b008-8d3973e6cbbc", 00:06:18.389 "assigned_rate_limits": { 00:06:18.389 "rw_ios_per_sec": 0, 00:06:18.389 "rw_mbytes_per_sec": 0, 00:06:18.389 "r_mbytes_per_sec": 0, 00:06:18.389 "w_mbytes_per_sec": 0 00:06:18.389 }, 00:06:18.389 "claimed": true, 00:06:18.389 "claim_type": "exclusive_write", 00:06:18.389 "zoned": false, 00:06:18.389 "supported_io_types": { 00:06:18.389 "read": true, 00:06:18.389 "write": true, 00:06:18.389 "unmap": true, 00:06:18.389 "write_zeroes": true, 00:06:18.389 "flush": true, 00:06:18.389 "reset": true, 
00:06:18.389 "compare": false, 00:06:18.389 "compare_and_write": false, 00:06:18.389 "abort": true, 00:06:18.389 "nvme_admin": false, 00:06:18.389 "nvme_io": false 00:06:18.389 }, 00:06:18.389 "memory_domains": [ 00:06:18.389 { 00:06:18.389 "dma_device_id": "system", 00:06:18.389 "dma_device_type": 1 00:06:18.389 }, 00:06:18.389 { 00:06:18.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:18.389 "dma_device_type": 2 00:06:18.389 } 00:06:18.389 ], 00:06:18.389 "driver_specific": {} 00:06:18.389 } 00:06:18.389 ]' 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:06:18.389 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:18.390 00:53:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:18.956 00:53:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:18.956 00:53:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:06:18.956 00:53:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:06:18.956 00:53:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:06:18.956 00:53:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:06:21.483 00:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:06:21.483 00:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:06:21.483 00:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:06:21.483 00:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:06:21.483 00:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:06:21.483 00:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:06:21.483 00:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:21.483 00:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:21.483 00:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:21.483 00:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:21.483 00:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:21.483 00:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:21.483 00:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:21.483 00:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:21.483 00:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:21.483 00:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:21.483 00:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:21.483 00:53:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:22.416 00:53:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:23.350 00:53:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:23.350 00:53:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:23.350 00:53:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:23.350 00:53:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:23.350 00:53:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:23.350 ************************************ 00:06:23.350 START TEST filesystem_in_capsule_ext4 00:06:23.350 ************************************ 00:06:23.350 00:53:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:23.350 00:53:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:23.350 00:53:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:23.350 00:53:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:23.350 00:53:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:06:23.350 00:53:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:23.350 00:53:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:06:23.350 00:53:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:06:23.350 00:53:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:06:23.350 00:53:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:06:23.350 00:53:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:23.350 mke2fs 1.46.5 (30-Dec-2021) 00:06:23.350 Discarding device blocks: 0/522240 done 00:06:23.350 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:23.350 Filesystem UUID: e756a9d5-7e1d-41e3-9474-8748cc3b067d 00:06:23.350 Superblock backups stored on blocks: 00:06:23.350 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:23.350 00:06:23.350 Allocating group tables: 0/64 done 00:06:23.350 Writing inode tables: 0/64 done 00:06:23.609 Creating journal (8192 blocks): done 00:06:24.432 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:06:24.432 00:06:24.432 00:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:06:24.432 00:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:24.690 00:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:24.690 00:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:24.690 00:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:24.690 00:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:24.690 00:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:24.690 00:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:24.690 00:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1148334 00:06:24.690 00:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:24.690 00:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:24.690 00:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:24.690 00:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:24.690 00:06:24.690 real 0m1.476s 00:06:24.690 user 0m0.015s 00:06:24.690 sys 0m0.035s 00:06:24.690 00:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:24.690 00:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:24.690 ************************************ 00:06:24.690 END TEST filesystem_in_capsule_ext4 00:06:24.690 ************************************ 00:06:24.690 00:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:24.690 00:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:24.690 00:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:24.690 00:53:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:24.690 ************************************ 00:06:24.690 START TEST filesystem_in_capsule_btrfs 00:06:24.690 ************************************ 00:06:24.691 00:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:24.691 00:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:24.691 00:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:24.691 00:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:24.691 00:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:06:24.691 00:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:24.691 00:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:06:24.691 00:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:06:24.691 00:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:06:24.691 00:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:06:24.691 00:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:25.256 btrfs-progs v6.6.2 00:06:25.256 See https://btrfs.readthedocs.io for more information. 00:06:25.256 00:06:25.256 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:25.256 NOTE: several default settings have changed in version 5.15, please make sure 00:06:25.256 this does not affect your deployments: 00:06:25.256 - DUP for metadata (-m dup) 00:06:25.256 - enabled no-holes (-O no-holes) 00:06:25.256 - enabled free-space-tree (-R free-space-tree) 00:06:25.256 00:06:25.256 Label: (null) 00:06:25.256 UUID: 18ba7da9-49c1-44c0-848b-cbe3641dc94e 00:06:25.256 Node size: 16384 00:06:25.256 Sector size: 4096 00:06:25.256 Filesystem size: 510.00MiB 00:06:25.256 Block group profiles: 00:06:25.256 Data: single 8.00MiB 00:06:25.256 Metadata: DUP 32.00MiB 00:06:25.256 System: DUP 8.00MiB 00:06:25.256 SSD detected: yes 00:06:25.256 Zoned device: no 00:06:25.256 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:25.256 Runtime features: free-space-tree 00:06:25.256 Checksum: crc32c 00:06:25.256 Number of devices: 1 00:06:25.256 Devices: 00:06:25.256 ID SIZE PATH 00:06:25.256 1 510.00MiB /dev/nvme0n1p1 00:06:25.256 00:06:25.256 00:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:06:25.256 00:53:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:26.190 00:53:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:26.190 00:53:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:26.190 00:53:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:26.190 00:53:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:26.190 00:53:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:26.190 00:53:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:26.190 00:53:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1148334 00:06:26.190 00:53:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:26.190 00:53:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:26.190 00:53:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:26.190 00:53:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:26.190 00:06:26.190 real 0m1.233s 00:06:26.190 user 0m0.018s 00:06:26.190 sys 0m0.046s 00:06:26.190 00:53:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:26.190 00:53:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:26.190 ************************************ 00:06:26.190 END TEST filesystem_in_capsule_btrfs 00:06:26.190 ************************************ 00:06:26.190 00:53:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:26.190 00:53:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:26.190 00:53:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:26.190 00:53:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:26.190 ************************************ 00:06:26.190 START TEST filesystem_in_capsule_xfs 00:06:26.190 ************************************ 00:06:26.190 00:53:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:06:26.190 00:53:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:26.190 00:53:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:26.190 00:53:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:26.190 00:53:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:06:26.190 00:53:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:26.190 00:53:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:06:26.190 00:53:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:06:26.190 00:53:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:06:26.190 00:53:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:06:26.190 00:53:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:26.190 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:26.190 = sectsz=512 attr=2, projid32bit=1 00:06:26.190 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:26.190 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:26.190 data = bsize=4096 blocks=130560, imaxpct=25 00:06:26.190 = sunit=0 swidth=0 blks 00:06:26.190 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:26.190 log =internal log bsize=4096 blocks=16384, version=2 00:06:26.190 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:26.190 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:26.756 Discarding blocks...Done. 
00:06:26.756 00:53:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:06:26.756 00:53:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:29.284 00:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:29.284 00:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:29.284 00:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:29.542 00:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:29.542 00:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:29.542 00:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:29.542 00:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1148334 00:06:29.542 00:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:29.542 00:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:29.542 00:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:29.542 00:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:29.542 00:06:29.542 real 0m3.392s 00:06:29.542 user 0m0.020s 00:06:29.542 sys 0m0.042s 00:06:29.542 00:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:29.542 00:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:29.542 ************************************ 00:06:29.542 END TEST filesystem_in_capsule_xfs 00:06:29.542 ************************************ 00:06:29.542 00:53:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:29.800 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:29.800 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:29.800 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:29.800 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:29.800 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:06:29.800 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:06:29.800 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:29.800 00:53:42 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:06:29.800 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:29.800 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:06:29.800 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:29.800 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.800 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:29.800 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.800 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:29.800 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1148334 00:06:29.800 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 1148334 ']' 00:06:29.800 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 1148334 00:06:29.800 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:06:29.800 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:29.800 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1148334 00:06:29.800 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:29.800 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:29.800 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1148334' 00:06:29.800 killing process with pid 1148334 00:06:29.800 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 1148334 00:06:29.800 [2024-05-15 00:53:42.121431] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:29.800 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 1148334 00:06:30.366 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:30.366 00:06:30.366 real 0m13.188s 00:06:30.366 user 0m50.706s 00:06:30.366 sys 0m1.740s 00:06:30.366 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:30.366 00:53:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:30.366 ************************************ 00:06:30.366 END TEST nvmf_filesystem_in_capsule 00:06:30.366 ************************************ 00:06:30.366 00:53:42 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:30.366 00:53:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:06:30.366 00:53:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:30.366 00:53:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:30.366 00:53:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:30.366 00:53:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:30.366 00:53:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:30.366 rmmod nvme_tcp 00:06:30.366 rmmod nvme_fabrics 00:06:30.366 rmmod nvme_keyring 00:06:30.366 00:53:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:30.366 00:53:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:30.366 00:53:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:30.366 00:53:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:30.366 00:53:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:30.366 00:53:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:30.366 00:53:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:30.366 00:53:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:30.366 00:53:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:30.366 00:53:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:30.366 00:53:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:30.366 00:53:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:32.907 00:53:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:32.907 00:06:32.907 real 0m30.701s 00:06:32.907 user 1m39.892s 00:06:32.907 sys 0m5.409s 00:06:32.907 00:53:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.907 00:53:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:32.907 ************************************ 00:06:32.907 END TEST nvmf_filesystem 00:06:32.907 ************************************ 00:06:32.907 00:53:44 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:32.907 00:53:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:32.907 00:53:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:32.907 00:53:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:32.907 ************************************ 00:06:32.907 START TEST nvmf_target_discovery 00:06:32.907 ************************************ 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:32.907 * Looking for test storage... 
00:06:32.907 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:06:32.907 00:53:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:35.498 00:53:47 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:35.498 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:35.498 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:35.498 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:35.499 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:35.499 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:35.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:35.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:06:35.499 00:06:35.499 --- 10.0.0.2 ping statistics --- 00:06:35.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:35.499 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:35.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:35.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:06:35.499 00:06:35.499 --- 10.0.0.1 ping statistics --- 00:06:35.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:35.499 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1152366 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1152366 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 1152366 ']' 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:35.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:35.499 00:53:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.499 [2024-05-15 00:53:47.561271] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:06:35.499 [2024-05-15 00:53:47.561343] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:35.499 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.499 [2024-05-15 00:53:47.644419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:35.499 [2024-05-15 00:53:47.769865] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:35.499 [2024-05-15 00:53:47.769949] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:35.499 [2024-05-15 00:53:47.769968] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:35.499 [2024-05-15 00:53:47.769983] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:35.499 [2024-05-15 00:53:47.769995] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:35.499 [2024-05-15 00:53:47.771958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.499 [2024-05-15 00:53:47.771992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.499 [2024-05-15 00:53:47.772043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:35.499 [2024-05-15 00:53:47.772047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.757 00:53:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:35.757 00:53:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:06:35.757 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:35.757 00:53:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:35.757 00:53:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.757 00:53:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:35.757 00:53:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:35.757 00:53:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.757 00:53:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.757 [2024-05-15 00:53:47.938021] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:35.757 00:53:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.757 00:53:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:06:35.757 00:53:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:35.757 00:53:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:06:35.757 00:53:47 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.757 00:53:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.757 Null1 00:06:35.757 00:53:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.757 00:53:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:35.757 00:53:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.757 00:53:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.757 00:53:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.758 00:53:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:35.758 00:53:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.758 00:53:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.758 00:53:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.758 00:53:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:35.758 00:53:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.758 00:53:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.758 [2024-05-15 00:53:47.978070] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:35.758 [2024-05-15 00:53:47.978399] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:35.758 00:53:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.758 00:53:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:35.758 00:53:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:35.758 00:53:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.758 00:53:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.758 Null2 00:06:35.758 00:53:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.758 00:53:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:35.758 00:53:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.758 00:53:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.758 00:53:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.758 00:53:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:35.758 00:53:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.758 00:53:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.758 Null3 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.758 Null4 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.758 00:53:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:06:36.016 00:06:36.016 Discovery Log Number of Records 6, Generation counter 6 00:06:36.016 =====Discovery Log Entry 0====== 00:06:36.016 trtype: tcp 00:06:36.016 adrfam: ipv4 00:06:36.016 subtype: current discovery subsystem 00:06:36.016 treq: not required 00:06:36.016 portid: 0 00:06:36.016 trsvcid: 4420 00:06:36.016 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:36.016 traddr: 10.0.0.2 00:06:36.016 eflags: explicit discovery connections, duplicate discovery information 00:06:36.016 sectype: none 00:06:36.016 =====Discovery Log Entry 1====== 00:06:36.016 trtype: tcp 00:06:36.016 adrfam: ipv4 00:06:36.016 subtype: nvme subsystem 00:06:36.016 treq: not required 00:06:36.016 portid: 0 00:06:36.016 trsvcid: 4420 00:06:36.016 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:36.016 traddr: 10.0.0.2 00:06:36.016 eflags: none 00:06:36.016 sectype: none 00:06:36.016 =====Discovery Log Entry 2====== 00:06:36.016 trtype: tcp 00:06:36.016 adrfam: ipv4 00:06:36.016 subtype: nvme subsystem 00:06:36.016 treq: not required 00:06:36.016 portid: 0 00:06:36.016 trsvcid: 4420 00:06:36.016 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:36.016 traddr: 10.0.0.2 00:06:36.016 eflags: none 00:06:36.016 sectype: none 00:06:36.016 =====Discovery Log Entry 3====== 00:06:36.016 trtype: tcp 00:06:36.016 adrfam: ipv4 00:06:36.016 subtype: nvme subsystem 00:06:36.016 treq: not required 00:06:36.016 portid: 0 00:06:36.016 trsvcid: 4420 00:06:36.016 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:36.016 traddr: 10.0.0.2 
00:06:36.016 eflags: none 00:06:36.016 sectype: none 00:06:36.016 =====Discovery Log Entry 4====== 00:06:36.016 trtype: tcp 00:06:36.016 adrfam: ipv4 00:06:36.016 subtype: nvme subsystem 00:06:36.016 treq: not required 00:06:36.016 portid: 0 00:06:36.016 trsvcid: 4420 00:06:36.016 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:36.016 traddr: 10.0.0.2 00:06:36.016 eflags: none 00:06:36.016 sectype: none 00:06:36.016 =====Discovery Log Entry 5====== 00:06:36.016 trtype: tcp 00:06:36.016 adrfam: ipv4 00:06:36.016 subtype: discovery subsystem referral 00:06:36.016 treq: not required 00:06:36.016 portid: 0 00:06:36.016 trsvcid: 4430 00:06:36.016 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:36.016 traddr: 10.0.0.2 00:06:36.016 eflags: none 00:06:36.016 sectype: none 00:06:36.016 00:53:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:36.016 Perform nvmf subsystem discovery via RPC 00:06:36.016 00:53:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:36.016 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.016 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:36.016 [ 00:06:36.016 { 00:06:36.016 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:36.016 "subtype": "Discovery", 00:06:36.016 "listen_addresses": [ 00:06:36.016 { 00:06:36.016 "trtype": "TCP", 00:06:36.016 "adrfam": "IPv4", 00:06:36.016 "traddr": "10.0.0.2", 00:06:36.016 "trsvcid": "4420" 00:06:36.016 } 00:06:36.016 ], 00:06:36.016 "allow_any_host": true, 00:06:36.016 "hosts": [] 00:06:36.016 }, 00:06:36.016 { 00:06:36.016 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:36.016 "subtype": "NVMe", 00:06:36.016 "listen_addresses": [ 00:06:36.016 { 00:06:36.016 "trtype": "TCP", 00:06:36.016 "adrfam": "IPv4", 00:06:36.016 "traddr": "10.0.0.2", 00:06:36.016 "trsvcid": "4420" 00:06:36.016 } 00:06:36.016 ], 00:06:36.016 "allow_any_host": true, 00:06:36.016 "hosts": [], 00:06:36.016 "serial_number": "SPDK00000000000001", 00:06:36.016 "model_number": "SPDK bdev Controller", 00:06:36.017 "max_namespaces": 32, 00:06:36.017 "min_cntlid": 1, 00:06:36.017 "max_cntlid": 65519, 00:06:36.017 "namespaces": [ 00:06:36.017 { 00:06:36.017 "nsid": 1, 00:06:36.017 "bdev_name": "Null1", 00:06:36.017 "name": "Null1", 00:06:36.017 "nguid": "716D32665CE04161B1C18D32C6CB6474", 00:06:36.017 "uuid": "716d3266-5ce0-4161-b1c1-8d32c6cb6474" 00:06:36.017 } 00:06:36.017 ] 00:06:36.017 }, 00:06:36.017 { 00:06:36.017 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:36.017 "subtype": "NVMe", 00:06:36.017 "listen_addresses": [ 00:06:36.017 { 00:06:36.017 "trtype": "TCP", 00:06:36.017 "adrfam": "IPv4", 00:06:36.017 "traddr": "10.0.0.2", 00:06:36.017 "trsvcid": "4420" 00:06:36.017 } 00:06:36.017 ], 00:06:36.017 "allow_any_host": true, 00:06:36.017 "hosts": [], 00:06:36.017 "serial_number": "SPDK00000000000002", 00:06:36.017 "model_number": "SPDK bdev Controller", 00:06:36.017 "max_namespaces": 32, 00:06:36.017 "min_cntlid": 1, 00:06:36.017 "max_cntlid": 65519, 00:06:36.017 "namespaces": [ 00:06:36.017 { 00:06:36.017 "nsid": 1, 00:06:36.017 "bdev_name": "Null2", 00:06:36.017 "name": "Null2", 00:06:36.017 "nguid": "CDB82484F4714870867D5969BA75B66A", 00:06:36.017 "uuid": "cdb82484-f471-4870-867d-5969ba75b66a" 00:06:36.017 } 00:06:36.017 ] 00:06:36.017 }, 00:06:36.017 { 00:06:36.017 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:36.017 "subtype": "NVMe", 00:06:36.017 "listen_addresses": [ 
00:06:36.017 { 00:06:36.017 "trtype": "TCP", 00:06:36.017 "adrfam": "IPv4", 00:06:36.017 "traddr": "10.0.0.2", 00:06:36.017 "trsvcid": "4420" 00:06:36.017 } 00:06:36.017 ], 00:06:36.017 "allow_any_host": true, 00:06:36.017 "hosts": [], 00:06:36.017 "serial_number": "SPDK00000000000003", 00:06:36.017 "model_number": "SPDK bdev Controller", 00:06:36.017 "max_namespaces": 32, 00:06:36.017 "min_cntlid": 1, 00:06:36.017 "max_cntlid": 65519, 00:06:36.017 "namespaces": [ 00:06:36.017 { 00:06:36.017 "nsid": 1, 00:06:36.017 "bdev_name": "Null3", 00:06:36.017 "name": "Null3", 00:06:36.017 "nguid": "9CE353DA5BBE437EA29A680380F665DA", 00:06:36.017 "uuid": "9ce353da-5bbe-437e-a29a-680380f665da" 00:06:36.017 } 00:06:36.017 ] 00:06:36.017 }, 00:06:36.017 { 00:06:36.017 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:36.017 "subtype": "NVMe", 00:06:36.017 "listen_addresses": [ 00:06:36.017 { 00:06:36.017 "trtype": "TCP", 00:06:36.017 "adrfam": "IPv4", 00:06:36.017 "traddr": "10.0.0.2", 00:06:36.017 "trsvcid": "4420" 00:06:36.017 } 00:06:36.017 ], 00:06:36.017 "allow_any_host": true, 00:06:36.017 "hosts": [], 00:06:36.017 "serial_number": "SPDK00000000000004", 00:06:36.017 "model_number": "SPDK bdev Controller", 00:06:36.017 "max_namespaces": 32, 00:06:36.017 "min_cntlid": 1, 00:06:36.017 "max_cntlid": 65519, 00:06:36.017 "namespaces": [ 00:06:36.017 { 00:06:36.017 "nsid": 1, 00:06:36.017 "bdev_name": "Null4", 00:06:36.017 "name": "Null4", 00:06:36.017 "nguid": "2C52E6DAB9C34E5DB41FDD046BD22286", 00:06:36.017 "uuid": "2c52e6da-b9c3-4e5d-b41f-dd046bd22286" 00:06:36.017 } 00:06:36.017 ] 00:06:36.017 } 00:06:36.017 ] 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:06:36.017 
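The teardown traced above deletes each subsystem before its backing null bdev and then drops the port-4430 referral before calling nvmftestfini; a minimal equivalent of that cleanup, assuming scripts/rpc.py talks to the same running target (the test itself goes through the rpc_cmd wrapper), would be:

    for i in 1 2 3 4; do
        scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i   # subsystem first...
        scripts/rpc.py bdev_null_delete Null$i                             # ...then its backing null bdev
    done
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
    scripts/rpc.py bdev_get_bdevs | jq -r '.[].name'                       # expect an empty list at this point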
00:53:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:36.017 00:53:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:36.017 rmmod nvme_tcp 00:06:36.017 rmmod nvme_fabrics 00:06:36.276 rmmod nvme_keyring 00:06:36.276 00:53:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:36.276 00:53:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:06:36.276 00:53:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:06:36.276 00:53:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1152366 ']' 00:06:36.276 00:53:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1152366 00:06:36.276 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 1152366 ']' 00:06:36.276 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 1152366 00:06:36.276 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:06:36.276 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:36.276 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1152366 00:06:36.276 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:36.276 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:36.276 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1152366' 00:06:36.276 killing process with pid 1152366 00:06:36.276 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 1152366 00:06:36.276 [2024-05-15 00:53:48.457954] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:36.276 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 1152366 00:06:36.536 00:53:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:36.536 00:53:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:36.536 00:53:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:36.536 00:53:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:36.536 00:53:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:36.536 00:53:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:36.536 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:36.536 00:53:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.441 00:53:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:38.441 00:06:38.441 real 0m5.991s 00:06:38.441 user 
0m4.692s 00:06:38.441 sys 0m2.182s 00:06:38.441 00:53:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:38.441 00:53:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:38.441 ************************************ 00:06:38.441 END TEST nvmf_target_discovery 00:06:38.441 ************************************ 00:06:38.441 00:53:50 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:38.441 00:53:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:38.441 00:53:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:38.441 00:53:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:38.700 ************************************ 00:06:38.700 START TEST nvmf_referrals 00:06:38.700 ************************************ 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:38.700 * Looking for test storage... 00:06:38.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.700 00:53:50 
nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:38.700 00:53:50 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:06:38.700 00:53:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:41.232 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:41.232 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:41.232 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:41.232 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
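The device discovery just traced walks the PCI bus cache and resolves each matching e810 port (0x8086:0x159b) to its kernel net device through sysfs; condensed, and keeping only the two addresses this run actually found, that logic amounts to the following sketch:

    for pci in 0000:0a:00.0 0000:0a:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")        # strip the sysfs path, keep e.g. cvl_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done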
00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:41.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:41.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:06:41.232 00:06:41.232 --- 10.0.0.2 ping statistics --- 00:06:41.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:41.232 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:41.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:41.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:06:41.232 00:06:41.232 --- 10.0.0.1 ping statistics --- 00:06:41.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:41.232 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1154758 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1154758 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 1154758 ']' 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:41.232 00:53:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:41.232 [2024-05-15 00:53:53.599879] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:06:41.232 [2024-05-15 00:53:53.599973] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:41.490 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.490 [2024-05-15 00:53:53.681032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:41.490 [2024-05-15 00:53:53.807733] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:41.490 [2024-05-15 00:53:53.807808] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:41.490 [2024-05-15 00:53:53.807824] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:41.490 [2024-05-15 00:53:53.807838] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:41.490 [2024-05-15 00:53:53.807850] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:41.490 [2024-05-15 00:53:53.809957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.490 [2024-05-15 00:53:53.809986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.490 [2024-05-15 00:53:53.810039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:41.490 [2024-05-15 00:53:53.810044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.749 00:53:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:41.749 00:53:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:06:41.749 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:41.749 00:53:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:41.749 00:53:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:41.749 00:53:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:41.749 00:53:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:41.749 00:53:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.749 00:53:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:41.749 [2024-05-15 00:53:53.977028] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:41.749 00:53:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.749 00:53:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:41.749 00:53:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.749 00:53:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:41.749 [2024-05-15 00:53:53.988995] nvmf_rpc.c: 
610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:41.749 [2024-05-15 00:53:53.989311] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:06:41.749 00:53:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.749 00:53:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:41.749 00:53:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.749 00:53:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:41.749 00:53:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.749 00:53:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:41.749 00:53:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.749 00:53:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:41.749 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.749 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:06:41.749 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.749 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:41.749 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.749 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:41.749 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:06:41.749 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.749 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:41.749 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.749 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:41.749 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:41.749 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:41.749 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:41.749 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.749 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:41.749 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:41.749 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:41.749 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.749 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:41.749 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:41.749 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:41.749 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:41.749 00:53:54 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:41.749 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:41.749 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:41.749 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:42.007 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:42.265 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:42.266 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 
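The checks running here keep comparing the target's referral table against what an initiator sees in the discovery log; a minimal version of that round trip on the same addresses (with the run's --hostnqn/--hostid flags omitted, and scripts/rpc.py standing in for the rpc_cmd wrapper) would be:

    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    # target-side view of the referral table
    scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
    # initiator-side view through the discovery service listening on port 8009
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'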
00:06:42.266 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:42.524 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:42.524 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:42.524 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:42.524 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:42.524 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:42.524 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:42.524 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:42.524 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:42.524 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:42.524 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:42.524 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:42.524 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:42.524 00:53:54 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:42.781 00:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:42.781 00:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:42.781 00:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.781 00:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:42.781 00:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.781 00:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:42.781 00:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:06:42.781 00:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.781 00:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:42.781 00:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.781 00:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:42.781 00:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:42.782 00:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:42.782 00:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:42.782 00:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:42.782 00:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:42.782 00:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:42.782 00:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:42.782 00:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:42.782 00:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:42.782 00:53:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:06:42.782 00:53:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:42.782 00:53:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:06:42.782 00:53:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:42.782 00:53:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:06:42.782 00:53:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:42.782 00:53:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:42.782 rmmod nvme_tcp 00:06:43.040 rmmod nvme_fabrics 00:06:43.040 rmmod nvme_keyring 00:06:43.040 00:53:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:43.040 00:53:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:06:43.040 00:53:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:06:43.040 00:53:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1154758 ']' 00:06:43.040 00:53:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1154758 00:06:43.040 00:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 1154758 ']' 00:06:43.040 00:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 1154758 00:06:43.040 00:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:06:43.040 00:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:43.040 00:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1154758 00:06:43.040 00:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:43.040 00:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:43.040 00:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1154758' 00:06:43.040 killing process with pid 1154758 00:06:43.040 00:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 1154758 00:06:43.040 [2024-05-15 00:53:55.234319] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:43.040 00:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 1154758 00:06:43.299 00:53:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:43.299 00:53:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:43.299 00:53:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:43.299 00:53:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:43.299 00:53:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 
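Each test ends with the same nvmftestfini sequence seen here for nvmf_referrals: unload the host-side nvme modules, kill the nvmf_tgt application, and flush the initiator interface; a condensed sketch, assuming $nvmfpid holds the pid recorded when the target was started:

    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill $nvmfpid                      # stop the nvmf_tgt reactor started for this test
    ip -4 addr flush cvl_0_1           # clear the initiator-side address again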
00:06:43.299 00:53:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.299 00:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:43.299 00:53:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.209 00:53:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:45.209 00:06:45.209 real 0m6.735s 00:06:45.209 user 0m8.242s 00:06:45.209 sys 0m2.264s 00:06:45.209 00:53:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.210 00:53:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:45.210 ************************************ 00:06:45.210 END TEST nvmf_referrals 00:06:45.210 ************************************ 00:06:45.210 00:53:57 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:45.210 00:53:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:45.210 00:53:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:45.210 00:53:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:45.468 ************************************ 00:06:45.468 START TEST nvmf_connect_disconnect 00:06:45.468 ************************************ 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:45.468 * Looking for test storage... 00:06:45.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:45.468 00:53:57 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
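The host identity captured here (NVME_HOSTNQN and NVME_HOSTID from nvme gen-hostnqn, packed into the NVME_HOST array) is what each connect/disconnect iteration recorded further down in this log uses when attaching to nqn.2016-06.io.spdk:cnode1. A rough sketch of one such iteration, assuming the listener that the test later creates on 10.0.0.2 port 4420 (illustrative only, not the connect_disconnect.sh source):

  hostnqn=$(nvme gen-hostnqn)
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$hostnqn"
  # Tear the association back down; nvme-cli then reports
  # "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" as seen below.
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1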
00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:06:45.468 00:53:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:47.998 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:47.998 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:06:47.998 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:06:47.999 
00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:47.999 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:47.999 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:47.999 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:47.999 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:47.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:47.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:06:47.999 00:06:47.999 --- 10.0.0.2 ping statistics --- 00:06:47.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:47.999 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:06:47.999 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:47.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:47.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:06:47.999 00:06:47.999 --- 10.0.0.1 ping statistics --- 00:06:48.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.000 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:06:48.000 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:48.000 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:06:48.000 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:48.000 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:48.000 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:48.000 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:48.000 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:48.000 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:48.000 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:48.000 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:06:48.000 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:48.000 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:48.000 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:48.000 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1157358 00:06:48.000 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:48.000 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1157358 00:06:48.000 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 1157358 ']' 00:06:48.000 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.000 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:48.000 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.000 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:48.000 00:54:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:48.000 [2024-05-15 00:54:00.263899] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:06:48.000 [2024-05-15 00:54:00.264003] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:48.000 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.000 [2024-05-15 00:54:00.347051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:48.258 [2024-05-15 00:54:00.455835] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:48.258 [2024-05-15 00:54:00.455890] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:48.258 [2024-05-15 00:54:00.455904] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:48.258 [2024-05-15 00:54:00.455914] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:48.258 [2024-05-15 00:54:00.455923] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:48.258 [2024-05-15 00:54:00.455995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.258 [2024-05-15 00:54:00.456053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.258 [2024-05-15 00:54:00.456117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:48.258 [2024-05-15 00:54:00.456120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.191 00:54:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:49.191 00:54:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:06:49.191 00:54:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:49.191 00:54:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:49.191 00:54:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:49.191 00:54:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:49.191 00:54:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:49.191 00:54:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.191 00:54:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:49.191 [2024-05-15 00:54:01.255700] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:49.191 00:54:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.191 00:54:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:06:49.191 00:54:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.192 00:54:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:49.192 00:54:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.192 00:54:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:06:49.192 00:54:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:49.192 00:54:01 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.192 00:54:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:49.192 00:54:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.192 00:54:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:49.192 00:54:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.192 00:54:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:49.192 00:54:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.192 00:54:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:49.192 00:54:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.192 00:54:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:49.192 [2024-05-15 00:54:01.311996] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:49.192 [2024-05-15 00:54:01.312303] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:49.192 00:54:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.192 00:54:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:06:49.192 00:54:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:06:49.192 00:54:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:06:51.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:54.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:57.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:00.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:02.630 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:02.630 00:54:14 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:02.630 00:54:14 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:02.630 00:54:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:02.630 00:54:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:07:02.630 00:54:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:02.630 00:54:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:07:02.630 00:54:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:02.630 00:54:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:02.630 rmmod nvme_tcp 00:07:02.630 rmmod nvme_fabrics 00:07:02.630 rmmod nvme_keyring 00:07:02.630 00:54:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:02.630 00:54:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:07:02.630 00:54:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:07:02.630 00:54:14 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1157358 ']' 00:07:02.630 00:54:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1157358 00:07:02.630 00:54:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 1157358 ']' 00:07:02.630 00:54:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 1157358 00:07:02.630 00:54:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:07:02.630 00:54:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:02.630 00:54:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1157358 00:07:02.630 00:54:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:02.630 00:54:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:02.630 00:54:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1157358' 00:07:02.630 killing process with pid 1157358 00:07:02.630 00:54:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 1157358 00:07:02.630 [2024-05-15 00:54:14.993273] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:02.630 00:54:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 1157358 00:07:03.198 00:54:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:03.198 00:54:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:03.198 00:54:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:03.198 00:54:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:03.198 00:54:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:03.198 00:54:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.198 00:54:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:03.199 00:54:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.104 00:54:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:05.104 00:07:05.104 real 0m19.727s 00:07:05.104 user 0m58.778s 00:07:05.104 sys 0m3.615s 00:07:05.104 00:54:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.104 00:54:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:05.104 ************************************ 00:07:05.104 END TEST nvmf_connect_disconnect 00:07:05.104 ************************************ 00:07:05.104 00:54:17 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:05.104 00:54:17 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:05.104 00:54:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.104 00:54:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:05.104 ************************************ 00:07:05.104 START TEST nvmf_multitarget 
00:07:05.104 ************************************ 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:05.104 * Looking for test storage... 00:07:05.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:05.104 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:05.105 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:05.105 00:54:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:05.105 00:54:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:07:05.105 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:05.105 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:05.105 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:05.105 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:05.105 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:05.105 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
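build_nvmf_app_args and nvmftestinit above only accumulate arguments; nvmfappstart later prefixes them with the target network namespace and launches the app, which is why the start-up command further down reads ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF. A simplified sketch of that assembly using the values visible in this trace (a sketch, not a verbatim excerpt of nvmf/common.sh):

  NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
  NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
  NVMF_APP+=(-i 0 -e 0xFFFF)                              # shared-memory id and trace-flag mask
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")  # run inside the target netns
  "${NVMF_APP[@]}" -m 0xF &                               # nvmfappstart appends the core mask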
00:07:05.105 00:54:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:05.105 00:54:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.105 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:05.105 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:05.105 00:54:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:07:05.105 00:54:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:07.636 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:07.636 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:07.636 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.636 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:07.637 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
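The loop above resolves each matching E810 PCI function to a kernel interface name simply by globbing sysfs, which is where the cvl_0_0 and cvl_0_1 names in the rest of this log come from. A condensed sketch of the same lookup for the two 0x159b functions reported here (illustrative, not the common.sh source):

  for pci in 0000:0a:00.0 0000:0a:00.1; do
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          echo "Found net devices under $pci: $(basename "$dev")"
      done
  done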
00:07:07.637 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:07.637 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:07.637 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.637 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:07.637 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:07.637 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.637 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:07.637 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:07:07.637 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:07.637 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:07.637 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:07.637 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:07.637 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:07.637 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:07.637 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:07.637 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:07.637 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:07.637 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:07.637 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:07.637 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:07.637 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:07.637 00:54:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:07.637 00:54:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:07.637 00:54:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:07.895 00:54:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:07.895 00:54:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:07.895 00:54:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:07.895 00:54:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:07.895 00:54:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:07.895 00:54:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:07.895 00:54:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:07.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:07.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:07:07.895 00:07:07.895 --- 10.0.0.2 ping statistics --- 00:07:07.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.895 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:07:07.895 00:54:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:07.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:07.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:07:07.895 00:07:07.895 --- 10.0.0.1 ping statistics --- 00:07:07.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.895 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:07:07.895 00:54:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:07.895 00:54:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:07:07.895 00:54:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:07.895 00:54:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:07.895 00:54:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:07.895 00:54:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:07.895 00:54:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:07.895 00:54:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:07.895 00:54:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:07.895 00:54:20 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:07.895 00:54:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:07.895 00:54:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:07.895 00:54:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:07.895 00:54:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1162132 00:07:07.895 00:54:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:07.895 00:54:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1162132 00:07:07.895 00:54:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 1162132 ']' 00:07:07.895 00:54:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.895 00:54:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:07.895 00:54:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.895 00:54:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:07.895 00:54:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:07.895 [2024-05-15 00:54:20.208871] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:07:07.895 [2024-05-15 00:54:20.208983] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:07.895 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.152 [2024-05-15 00:54:20.292335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:08.152 [2024-05-15 00:54:20.414781] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:08.152 [2024-05-15 00:54:20.414833] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:08.152 [2024-05-15 00:54:20.414846] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:08.152 [2024-05-15 00:54:20.414857] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:08.152 [2024-05-15 00:54:20.414867] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:08.152 [2024-05-15 00:54:20.414955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.152 [2024-05-15 00:54:20.414994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.152 [2024-05-15 00:54:20.415021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:08.152 [2024-05-15 00:54:20.415024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.082 00:54:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:09.082 00:54:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:07:09.082 00:54:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:09.082 00:54:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:09.082 00:54:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:09.082 00:54:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:09.082 00:54:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:09.082 00:54:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:09.082 00:54:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:07:09.082 00:54:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:09.082 00:54:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:09.082 "nvmf_tgt_1" 00:07:09.338 00:54:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:09.338 "nvmf_tgt_2" 00:07:09.339 00:54:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:09.339 00:54:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:07:09.339 00:54:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:07:09.339 
00:54:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:09.596 true 00:07:09.596 00:54:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:09.596 true 00:07:09.596 00:54:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:09.596 00:54:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:07:09.854 00:54:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:09.854 00:54:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:09.854 00:54:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:07:09.854 00:54:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:09.854 00:54:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:07:09.854 00:54:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:09.854 00:54:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:07:09.854 00:54:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:09.854 00:54:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:09.854 rmmod nvme_tcp 00:07:09.854 rmmod nvme_fabrics 00:07:09.854 rmmod nvme_keyring 00:07:09.854 00:54:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:09.854 00:54:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:07:09.854 00:54:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:07:09.854 00:54:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1162132 ']' 00:07:09.854 00:54:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1162132 00:07:09.854 00:54:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 1162132 ']' 00:07:09.854 00:54:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 1162132 00:07:09.854 00:54:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:07:09.854 00:54:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:09.854 00:54:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1162132 00:07:09.854 00:54:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:09.854 00:54:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:09.854 00:54:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1162132' 00:07:09.854 killing process with pid 1162132 00:07:09.854 00:54:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 1162132 00:07:09.854 00:54:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 1162132 00:07:10.115 00:54:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:10.115 00:54:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:10.115 00:54:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:10.115 00:54:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:10.115 00:54:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:10.115 00:54:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.115 00:54:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:10.115 00:54:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.055 00:54:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:12.055 00:07:12.055 real 0m7.003s 00:07:12.055 user 0m9.584s 00:07:12.055 sys 0m2.289s 00:07:12.055 00:54:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:12.055 00:54:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:12.055 ************************************ 00:07:12.055 END TEST nvmf_multitarget 00:07:12.055 ************************************ 00:07:12.055 00:54:24 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:12.055 00:54:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:12.055 00:54:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:12.055 00:54:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:12.314 ************************************ 00:07:12.314 START TEST nvmf_rpc 00:07:12.314 ************************************ 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:12.314 * Looking for test storage... 00:07:12.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:12.314 00:54:24 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:12.314 
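For reference, the host identity that later nvme connect calls pass as --hostnqn/--hostid is generated once here by common.sh; a rough sketch, assuming nvme-cli is installed (the exact parameter expansion common.sh uses to derive the hostid is an assumption).

HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
HOSTID=${HOSTNQN##*:}                # bare UUID portion, reused as --hostid (assumption)
NVME_HOST=(--hostnqn="$HOSTNQN" --hostid="$HOSTID")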
00:54:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:07:12.314 00:54:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:14.846 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:14.846 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:14.847 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:14.847 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:14.847 
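The device discovery above maps each supported PCI function to its kernel net device purely through sysfs; the same lookup can be done by hand as below, with the two E810 addresses from this host hard-coded for illustration.

for pci in 0000:0a:00.0 0000:0a:00.1; do
    # the driver-bound net device name(s) are exposed under the PCI function's net/ directory
    ls "/sys/bus/pci/devices/$pci/net/"           # -> cvl_0_0 and cvl_0_1 on this host
done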
00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:14.847 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:14.847 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:15.105 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:15.105 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:15.105 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:15.105 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:15.105 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:15.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:15.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:07:15.105 00:07:15.105 --- 10.0.0.2 ping statistics --- 00:07:15.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.105 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:07:15.105 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:15.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:15.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:07:15.105 00:07:15.106 --- 10.0.0.1 ping statistics --- 00:07:15.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.106 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:07:15.106 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:15.106 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:07:15.106 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:15.106 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:15.106 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:15.106 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:15.106 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:15.106 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:15.106 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:15.106 00:54:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:15.106 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:15.106 00:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:15.106 00:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.106 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1164660 00:07:15.106 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:15.106 00:54:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1164660 00:07:15.106 00:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 1164660 ']' 00:07:15.106 00:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.106 00:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:15.106 00:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.106 00:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:15.106 00:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.106 [2024-05-15 00:54:27.373294] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
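The network plumbing performed just above gives the target side its own namespace and leaves the initiator side in the root namespace; condensed from the trace (run as root), using the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addresses from this host.

NS=cvl_0_0_ns_spdk
ip netns add $NS
ip link set cvl_0_0 netns $NS                                   # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side stays in the root namespace
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in on the initiator port
ping -c 1 10.0.0.2                                              # root namespace -> target namespace
ip netns exec $NS ping -c 1 10.0.0.1                            # and the reverse direction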
00:07:15.106 [2024-05-15 00:54:27.373371] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.106 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.106 [2024-05-15 00:54:27.451118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:15.364 [2024-05-15 00:54:27.564266] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:15.364 [2024-05-15 00:54:27.564323] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:15.364 [2024-05-15 00:54:27.564336] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:15.364 [2024-05-15 00:54:27.564347] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:15.364 [2024-05-15 00:54:27.564356] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:15.364 [2024-05-15 00:54:27.564408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.364 [2024-05-15 00:54:27.564465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.364 [2024-05-15 00:54:27.564531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:15.364 [2024-05-15 00:54:27.564534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.929 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:15.929 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:16.188 00:54:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:16.188 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:16.188 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.188 00:54:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:16.188 00:54:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:16.188 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.188 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.188 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.188 00:54:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:07:16.188 "tick_rate": 2700000000, 00:07:16.188 "poll_groups": [ 00:07:16.188 { 00:07:16.188 "name": "nvmf_tgt_poll_group_000", 00:07:16.188 "admin_qpairs": 0, 00:07:16.188 "io_qpairs": 0, 00:07:16.188 "current_admin_qpairs": 0, 00:07:16.188 "current_io_qpairs": 0, 00:07:16.188 "pending_bdev_io": 0, 00:07:16.188 "completed_nvme_io": 0, 00:07:16.188 "transports": [] 00:07:16.188 }, 00:07:16.188 { 00:07:16.188 "name": "nvmf_tgt_poll_group_001", 00:07:16.188 "admin_qpairs": 0, 00:07:16.188 "io_qpairs": 0, 00:07:16.188 "current_admin_qpairs": 0, 00:07:16.188 "current_io_qpairs": 0, 00:07:16.188 "pending_bdev_io": 0, 00:07:16.188 "completed_nvme_io": 0, 00:07:16.188 "transports": [] 00:07:16.188 }, 00:07:16.188 { 00:07:16.188 "name": "nvmf_tgt_poll_group_002", 00:07:16.188 "admin_qpairs": 0, 00:07:16.188 "io_qpairs": 0, 00:07:16.188 "current_admin_qpairs": 0, 00:07:16.188 "current_io_qpairs": 0, 00:07:16.188 "pending_bdev_io": 0, 00:07:16.188 "completed_nvme_io": 0, 00:07:16.188 "transports": [] 
00:07:16.188 }, 00:07:16.188 { 00:07:16.188 "name": "nvmf_tgt_poll_group_003", 00:07:16.188 "admin_qpairs": 0, 00:07:16.188 "io_qpairs": 0, 00:07:16.188 "current_admin_qpairs": 0, 00:07:16.188 "current_io_qpairs": 0, 00:07:16.188 "pending_bdev_io": 0, 00:07:16.188 "completed_nvme_io": 0, 00:07:16.188 "transports": [] 00:07:16.188 } 00:07:16.188 ] 00:07:16.188 }' 00:07:16.188 00:54:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:16.188 00:54:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:16.188 00:54:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:16.188 00:54:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:07:16.188 00:54:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:16.188 00:54:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:16.188 00:54:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:16.188 00:54:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:16.188 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.188 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.188 [2024-05-15 00:54:28.446942] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:16.188 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.188 00:54:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:16.188 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.188 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.188 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.188 00:54:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:07:16.188 "tick_rate": 2700000000, 00:07:16.188 "poll_groups": [ 00:07:16.188 { 00:07:16.188 "name": "nvmf_tgt_poll_group_000", 00:07:16.188 "admin_qpairs": 0, 00:07:16.188 "io_qpairs": 0, 00:07:16.188 "current_admin_qpairs": 0, 00:07:16.188 "current_io_qpairs": 0, 00:07:16.188 "pending_bdev_io": 0, 00:07:16.188 "completed_nvme_io": 0, 00:07:16.188 "transports": [ 00:07:16.188 { 00:07:16.188 "trtype": "TCP" 00:07:16.188 } 00:07:16.188 ] 00:07:16.188 }, 00:07:16.188 { 00:07:16.188 "name": "nvmf_tgt_poll_group_001", 00:07:16.188 "admin_qpairs": 0, 00:07:16.188 "io_qpairs": 0, 00:07:16.188 "current_admin_qpairs": 0, 00:07:16.188 "current_io_qpairs": 0, 00:07:16.188 "pending_bdev_io": 0, 00:07:16.188 "completed_nvme_io": 0, 00:07:16.188 "transports": [ 00:07:16.188 { 00:07:16.188 "trtype": "TCP" 00:07:16.188 } 00:07:16.188 ] 00:07:16.188 }, 00:07:16.188 { 00:07:16.188 "name": "nvmf_tgt_poll_group_002", 00:07:16.188 "admin_qpairs": 0, 00:07:16.188 "io_qpairs": 0, 00:07:16.188 "current_admin_qpairs": 0, 00:07:16.188 "current_io_qpairs": 0, 00:07:16.188 "pending_bdev_io": 0, 00:07:16.188 "completed_nvme_io": 0, 00:07:16.188 "transports": [ 00:07:16.188 { 00:07:16.188 "trtype": "TCP" 00:07:16.188 } 00:07:16.188 ] 00:07:16.188 }, 00:07:16.188 { 00:07:16.188 "name": "nvmf_tgt_poll_group_003", 00:07:16.189 "admin_qpairs": 0, 00:07:16.189 "io_qpairs": 0, 00:07:16.189 "current_admin_qpairs": 0, 00:07:16.189 "current_io_qpairs": 0, 00:07:16.189 "pending_bdev_io": 0, 00:07:16.189 "completed_nvme_io": 0, 00:07:16.189 "transports": [ 00:07:16.189 { 00:07:16.189 "trtype": "TCP" 00:07:16.189 } 00:07:16.189 ] 00:07:16.189 } 00:07:16.189 ] 
00:07:16.189 }' 00:07:16.189 00:54:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:16.189 00:54:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:16.189 00:54:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:16.189 00:54:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:16.189 00:54:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:16.189 00:54:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:16.189 00:54:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:16.189 00:54:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:16.189 00:54:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:16.189 00:54:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:16.189 00:54:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:16.189 00:54:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:16.189 00:54:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:16.189 00:54:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:16.189 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.189 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.447 Malloc1 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.447 [2024-05-15 00:54:28.613182] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:16.447 [2024-05-15 00:54:28.613487] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:16.447 00:54:28 
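Put together, the provisioning rpc.sh has just performed against the running nvmf_tgt is a handful of RPCs; a condensed sketch, assuming $SPDK points at the SPDK checkout and rpc.py talks to the target's default /var/tmp/spdk.sock.

RPC="$SPDK/scripts/rpc.py"                                          # $SPDK is an assumption

$RPC nvmf_create_transport -t tcp -o -u 8192                        # transport options exactly as in the trace
$RPC bdev_malloc_create 64 512 -b Malloc1                           # 64 MiB RAM-backed bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1    # require explicit host entries for the ACL test
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420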
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:16.447 [2024-05-15 00:54:28.636115] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:07:16.447 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:16.447 could not add new controller: failed to write to nvme-fabrics device 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.447 00:54:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:17.012 00:54:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 
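The failed connect above is the expected outcome while the subsystem has no host entries and allow_any_host is disabled; granting access and retrying is one more RPC. A sketch using the hostnqn generated for this run, with $SPDK again assumed to be the checkout path.

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55   # from nvme gen-hostnqn earlier

# with no host entry the target rejects the login:
#   "Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host ..."
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=$HOSTNQN --hostid=${HOSTNQN##*:} || true

# allow exactly this host, after which the same connect succeeds
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 $HOSTNQN
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=$HOSTNQN --hostid=${HOSTNQN##*:}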
00:07:17.012 00:54:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:17.012 00:54:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:17.012 00:54:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:17.012 00:54:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:18.910 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:18.910 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:18.910 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:18.910 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:18.910 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:18.910 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:18.910 00:54:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:19.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:19.168 00:54:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:19.168 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:19.168 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:19.168 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:19.168 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:19.168 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:19.168 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:19.168 00:54:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:19.168 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.168 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.168 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.168 00:54:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:19.168 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:19.168 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:19.168 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:19.168 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.168 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:19.168 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.168 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:19.168 00:54:31 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.168 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:19.168 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:19.168 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:19.168 [2024-05-15 00:54:31.380811] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:07:19.168 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:19.168 could not add new controller: failed to write to nvme-fabrics device 00:07:19.168 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:19.168 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:19.168 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:19.168 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:19.168 00:54:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:19.168 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.168 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.168 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.168 00:54:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:19.734 00:54:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:19.734 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:19.734 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:19.734 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:19.734 00:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:21.631 00:54:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:21.631 00:54:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:21.631 00:54:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:21.631 00:54:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:21.631 00:54:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:21.631 00:54:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:21.631 00:54:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:21.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:21.889 00:54:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:21.889 00:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:21.889 00:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:21.889 00:54:34 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:21.889 00:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:21.889 00:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:21.889 00:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:21.889 00:54:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:21.889 00:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.889 00:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.889 00:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.889 00:54:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:21.889 00:54:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:21.889 00:54:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:21.889 00:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.889 00:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.889 00:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.889 00:54:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:21.889 00:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.889 00:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.889 [2024-05-15 00:54:34.136472] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:21.889 00:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.889 00:54:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:21.889 00:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.889 00:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.889 00:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.889 00:54:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:21.889 00:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.889 00:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.889 00:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.889 00:54:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:22.453 00:54:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:22.453 00:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:22.453 00:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:22.453 00:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:22.453 00:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:24.351 00:54:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:24.351 
00:54:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:24.351 00:54:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:24.351 00:54:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:24.351 00:54:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:24.351 00:54:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:24.351 00:54:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:24.629 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:24.629 00:54:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:24.629 00:54:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:24.629 00:54:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:24.629 00:54:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:24.629 00:54:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:24.629 00:54:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:24.629 00:54:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:24.629 00:54:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:24.629 00:54:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.629 00:54:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.629 00:54:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.629 00:54:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:24.629 00:54:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.629 00:54:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.629 00:54:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.629 00:54:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:24.629 00:54:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:24.629 00:54:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.629 00:54:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.629 00:54:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.629 00:54:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:24.629 00:54:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.629 00:54:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.629 [2024-05-15 00:54:36.872051] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:24.629 00:54:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.629 00:54:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:24.629 00:54:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.629 00:54:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set 
+x 00:07:24.629 00:54:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.629 00:54:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:24.629 00:54:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.629 00:54:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.629 00:54:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.629 00:54:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:25.192 00:54:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:25.192 00:54:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:25.192 00:54:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:25.192 00:54:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:25.192 00:54:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:27.088 00:54:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:27.088 00:54:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:27.088 00:54:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:27.088 00:54:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:27.088 00:54:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:27.088 00:54:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:27.088 00:54:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:27.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:27.352 00:54:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:27.352 00:54:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:27.352 00:54:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:27.352 00:54:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:27.352 00:54:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:27.352 00:54:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:27.352 00:54:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:27.352 00:54:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:27.352 00:54:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.352 00:54:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.352 00:54:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.353 00:54:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:27.353 00:54:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.353 00:54:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.353 00:54:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.353 00:54:39 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:27.353 00:54:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:27.353 00:54:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.353 00:54:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.353 00:54:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.353 00:54:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:27.353 00:54:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.353 00:54:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.353 [2024-05-15 00:54:39.554460] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:27.353 00:54:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.353 00:54:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:27.353 00:54:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.353 00:54:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.353 00:54:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.353 00:54:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:27.353 00:54:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.353 00:54:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.353 00:54:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.353 00:54:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:27.945 00:54:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:27.945 00:54:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:27.945 00:54:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:27.945 00:54:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:27.945 00:54:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:29.842 00:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:29.842 00:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:29.842 00:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:29.842 00:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:29.842 00:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:29.842 00:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:29.842 00:54:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:30.101 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:30.101 00:54:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:30.101 00:54:42 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1215 -- # local i=0 00:07:30.101 00:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:30.101 00:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:30.101 00:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:30.101 00:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:30.101 00:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:30.101 00:54:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:30.101 00:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.101 00:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.101 00:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.101 00:54:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:30.101 00:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.101 00:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.101 00:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.101 00:54:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:30.101 00:54:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:30.101 00:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.101 00:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.101 00:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.101 00:54:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:30.101 00:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.101 00:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.101 [2024-05-15 00:54:42.336815] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:30.101 00:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.101 00:54:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:30.101 00:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.101 00:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.101 00:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.101 00:54:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:30.101 00:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.101 00:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.101 00:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.101 00:54:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:30.666 00:54:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial 
SPDKISFASTANDAWESOME 00:07:30.666 00:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:30.666 00:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:30.666 00:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:30.666 00:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:32.562 00:54:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:32.562 00:54:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:32.562 00:54:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:32.562 00:54:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:32.562 00:54:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:32.562 00:54:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:32.562 00:54:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:32.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:32.819 00:54:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:32.819 00:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:32.819 00:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:32.819 00:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:32.819 00:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:32.819 00:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:32.819 00:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:32.819 00:54:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:32.819 00:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.819 00:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.819 00:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.819 00:54:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:32.819 00:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.819 00:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.819 00:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.819 00:54:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:32.819 00:54:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:32.819 00:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.819 00:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.819 00:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.819 00:54:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:32.819 00:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.819 00:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.819 
[2024-05-15 00:54:45.055310] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:32.819 00:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.819 00:54:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:32.819 00:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.819 00:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.819 00:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.820 00:54:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:32.820 00:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.820 00:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.820 00:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.820 00:54:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:33.384 00:54:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:33.384 00:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:33.384 00:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:33.384 00:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:33.384 00:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:35.909 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:35.909 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:35.909 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:35.909 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:35.909 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:35.909 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:35.909 00:54:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:35.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:35.909 00:54:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:35.909 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:35.909 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:35.909 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.910 [2024-05-15 00:54:47.845967] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 
-- # xtrace_disable 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.910 [2024-05-15 00:54:47.894048] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.910 [2024-05-15 00:54:47.942206] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:35.910 
00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.910 [2024-05-15 00:54:47.990389] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.910 00:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.910 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.910 00:54:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:35.910 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.910 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.910 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.910 00:54:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.910 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.910 00:54:48 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.910 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.910 00:54:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:35.910 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.910 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.910 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.910 00:54:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:35.910 00:54:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:35.910 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.910 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.910 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.910 00:54:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:35.910 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.910 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.910 [2024-05-15 00:54:48.038516] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:35.910 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.910 00:54:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:35.910 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.910 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
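Stripped of the xtrace noise, the per-iteration cycle that target/rpc.sh@81-@94 drives above reduces to the RPC and nvme-cli calls below. This is a minimal sketch using the NQN, serial, and bdev names from the trace; the workspace-specific rpc.py path is shortened to scripts/rpc.py, and the --hostid value the harness also passes to nvme connect is omitted (the @99-@107 loop that follows is the same cycle minus the host connect/disconnect).

    # create the subsystem, expose it over TCP, attach a namespace, open it to any host
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    # connect from the initiator side and wait for the serial to appear (waitforserial)
    nvme connect --hostnqn="$(nvme gen-hostnqn)" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME    # expect 1 once the namespace is up
    # tear the iteration back down
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1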
00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:07:35.911 "tick_rate": 2700000000, 00:07:35.911 "poll_groups": [ 00:07:35.911 { 00:07:35.911 "name": "nvmf_tgt_poll_group_000", 00:07:35.911 "admin_qpairs": 2, 00:07:35.911 "io_qpairs": 84, 00:07:35.911 "current_admin_qpairs": 0, 00:07:35.911 "current_io_qpairs": 0, 00:07:35.911 "pending_bdev_io": 0, 00:07:35.911 "completed_nvme_io": 184, 00:07:35.911 "transports": [ 00:07:35.911 { 00:07:35.911 "trtype": "TCP" 00:07:35.911 } 00:07:35.911 ] 00:07:35.911 }, 00:07:35.911 { 00:07:35.911 "name": "nvmf_tgt_poll_group_001", 00:07:35.911 "admin_qpairs": 2, 00:07:35.911 "io_qpairs": 84, 00:07:35.911 "current_admin_qpairs": 0, 00:07:35.911 "current_io_qpairs": 0, 00:07:35.911 "pending_bdev_io": 0, 00:07:35.911 "completed_nvme_io": 183, 00:07:35.911 "transports": [ 00:07:35.911 { 00:07:35.911 "trtype": "TCP" 00:07:35.911 } 00:07:35.911 ] 00:07:35.911 }, 00:07:35.911 { 00:07:35.911 "name": "nvmf_tgt_poll_group_002", 00:07:35.911 "admin_qpairs": 1, 00:07:35.911 "io_qpairs": 84, 00:07:35.911 "current_admin_qpairs": 0, 00:07:35.911 "current_io_qpairs": 0, 00:07:35.911 "pending_bdev_io": 0, 00:07:35.911 "completed_nvme_io": 180, 00:07:35.911 "transports": [ 00:07:35.911 { 00:07:35.911 "trtype": "TCP" 00:07:35.911 } 00:07:35.911 ] 00:07:35.911 }, 00:07:35.911 { 00:07:35.911 "name": "nvmf_tgt_poll_group_003", 00:07:35.911 "admin_qpairs": 2, 00:07:35.911 "io_qpairs": 84, 00:07:35.911 "current_admin_qpairs": 0, 00:07:35.911 "current_io_qpairs": 0, 00:07:35.911 "pending_bdev_io": 0, 00:07:35.911 "completed_nvme_io": 139, 00:07:35.911 "transports": [ 00:07:35.911 { 00:07:35.911 "trtype": "TCP" 00:07:35.911 } 00:07:35.911 ] 00:07:35.911 } 00:07:35.911 ] 00:07:35.911 }' 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:35.911 rmmod nvme_tcp 00:07:35.911 rmmod nvme_fabrics 00:07:35.911 rmmod nvme_keyring 00:07:35.911 
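The jsum helper applied to the stats above is just jq piped into awk. An equivalent standalone pair of one-liners, with stats.json as an assumed capture of nvmf_get_stats, reproduces the two sums checked in this run (admin_qpairs 2+2+1+2 = 7 and io_qpairs 84x4 = 336):

    scripts/rpc.py nvmf_get_stats > stats.json
    jq '.poll_groups[].admin_qpairs' stats.json | awk '{s+=$1} END {print s}'   # 7 in the run above
    jq '.poll_groups[].io_qpairs'    stats.json | awk '{s+=$1} END {print s}'   # 336 in the run above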
00:54:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1164660 ']' 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1164660 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 1164660 ']' 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 1164660 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1164660 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1164660' 00:07:35.911 killing process with pid 1164660 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 1164660 00:07:35.911 [2024-05-15 00:54:48.272095] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:35.911 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 1164660 00:07:36.478 00:54:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:36.478 00:54:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:36.479 00:54:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:36.479 00:54:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:36.479 00:54:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:36.479 00:54:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.479 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:36.479 00:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.385 00:54:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:38.385 00:07:38.385 real 0m26.156s 00:07:38.385 user 1m23.411s 00:07:38.385 sys 0m4.395s 00:07:38.385 00:54:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:38.385 00:54:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.385 ************************************ 00:07:38.385 END TEST nvmf_rpc 00:07:38.385 ************************************ 00:07:38.385 00:54:50 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:38.385 00:54:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:38.385 00:54:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:38.385 00:54:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:38.385 ************************************ 00:07:38.385 START TEST nvmf_invalid 00:07:38.385 ************************************ 00:07:38.385 00:54:50 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:38.385 * Looking for test storage... 00:07:38.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:38.385 00:54:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:38.385 00:54:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:07:38.385 00:54:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:38.385 00:54:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.385 00:54:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.385 00:54:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.385 00:54:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:38.385 00:54:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:38.385 00:54:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.385 00:54:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:38.385 00:54:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.385 00:54:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:38.385 00:54:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:38.385 00:54:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:38.385 00:54:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:38.385 00:54:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:38.385 00:54:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:38.385 00:54:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:38.385 00:54:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:38.385 00:54:50 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.385 00:54:50 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.385 00:54:50 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.385 00:54:50 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.385 00:54:50 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.386 00:54:50 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.386 00:54:50 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:07:38.386 00:54:50 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.386 00:54:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:07:38.386 00:54:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:38.386 00:54:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:38.386 00:54:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:38.386 00:54:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:38.386 00:54:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:38.386 00:54:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:38.386 00:54:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:38.386 00:54:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:38.386 00:54:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:38.386 00:54:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:38.386 00:54:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:07:38.386 00:54:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:07:38.386 00:54:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:07:38.386 00:54:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:07:38.386 00:54:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:38.386 00:54:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:38.386 00:54:50 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:07:38.386 00:54:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:38.386 00:54:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:38.386 00:54:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.386 00:54:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:38.386 00:54:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.386 00:54:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:38.386 00:54:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:38.386 00:54:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:07:38.386 00:54:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:41.673 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:41.673 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:07:41.673 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:41.673 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:41.673 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:41.673 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:41.673 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:41.673 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:07:41.673 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:41.673 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:07:41.673 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:07:41.673 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:07:41.673 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:07:41.673 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:07:41.673 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:07:41.673 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:41.673 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:41.673 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:41.674 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:41.674 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:41.674 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:41.674 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:41.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:41.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:07:41.674 00:07:41.674 --- 10.0.0.2 ping statistics --- 00:07:41.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.674 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:41.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:41.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:07:41.674 00:07:41.674 --- 10.0.0.1 ping statistics --- 00:07:41.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.674 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1169574 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1169574 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 1169574 ']' 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:41.674 00:54:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:41.674 [2024-05-15 00:54:53.607241] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
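The nvmftestinit sequence traced above (nvmf/common.sh) moves the target-side E810 port into a private network namespace, addresses both ends, opens TCP/4420, and sanity-pings in both directions before launching nvmf_tgt inside the namespace. A condensed sketch of those steps, with the long workspace path to the binary shortened to build/bin/nvmf_tgt:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                              # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target -> initiator
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF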
00:07:41.674 [2024-05-15 00:54:53.607337] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.674 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.674 [2024-05-15 00:54:53.689767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:41.674 [2024-05-15 00:54:53.806093] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:41.674 [2024-05-15 00:54:53.806153] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:41.674 [2024-05-15 00:54:53.806181] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:41.674 [2024-05-15 00:54:53.806193] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:41.674 [2024-05-15 00:54:53.806203] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:41.674 [2024-05-15 00:54:53.806257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.674 [2024-05-15 00:54:53.806282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.674 [2024-05-15 00:54:53.806340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.674 [2024-05-15 00:54:53.806344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.239 00:54:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:42.239 00:54:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:07:42.239 00:54:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:42.239 00:54:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:42.239 00:54:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:42.239 00:54:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:42.239 00:54:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:42.239 00:54:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode30182 00:07:42.496 [2024-05-15 00:54:54.843603] nvmf_rpc.c: 391:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:07:42.496 00:54:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:07:42.496 { 00:07:42.496 "nqn": "nqn.2016-06.io.spdk:cnode30182", 00:07:42.496 "tgt_name": "foobar", 00:07:42.496 "method": "nvmf_create_subsystem", 00:07:42.496 "req_id": 1 00:07:42.496 } 00:07:42.496 Got JSON-RPC error response 00:07:42.496 response: 00:07:42.496 { 00:07:42.496 "code": -32603, 00:07:42.496 "message": "Unable to find target foobar" 00:07:42.496 }' 00:07:42.497 00:54:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:07:42.497 { 00:07:42.497 "nqn": "nqn.2016-06.io.spdk:cnode30182", 00:07:42.497 "tgt_name": "foobar", 00:07:42.497 "method": "nvmf_create_subsystem", 00:07:42.497 "req_id": 1 00:07:42.497 } 00:07:42.497 Got JSON-RPC error response 00:07:42.497 response: 00:07:42.497 { 00:07:42.497 "code": -32603, 00:07:42.497 "message": "Unable to find target foobar" 00:07:42.497 } == *\U\n\a\b\l\e\ 
\t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:07:42.497 00:54:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:07:42.497 00:54:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15116 00:07:42.753 [2024-05-15 00:54:55.096424] nvmf_rpc.c: 408:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15116: invalid serial number 'SPDKISFASTANDAWESOME' 00:07:42.753 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:07:42.753 { 00:07:42.753 "nqn": "nqn.2016-06.io.spdk:cnode15116", 00:07:42.753 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:42.753 "method": "nvmf_create_subsystem", 00:07:42.753 "req_id": 1 00:07:42.753 } 00:07:42.753 Got JSON-RPC error response 00:07:42.753 response: 00:07:42.753 { 00:07:42.753 "code": -32602, 00:07:42.753 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:42.753 }' 00:07:42.753 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:07:42.753 { 00:07:42.753 "nqn": "nqn.2016-06.io.spdk:cnode15116", 00:07:42.753 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:42.753 "method": "nvmf_create_subsystem", 00:07:42.753 "req_id": 1 00:07:42.753 } 00:07:42.754 Got JSON-RPC error response 00:07:42.754 response: 00:07:42.754 { 00:07:42.754 "code": -32602, 00:07:42.754 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:42.754 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:42.754 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:07:42.754 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode10047 00:07:43.011 [2024-05-15 00:54:55.333153] nvmf_rpc.c: 417:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10047: invalid model number 'SPDK_Controller' 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:07:43.011 { 00:07:43.011 "nqn": "nqn.2016-06.io.spdk:cnode10047", 00:07:43.011 "model_number": "SPDK_Controller\u001f", 00:07:43.011 "method": "nvmf_create_subsystem", 00:07:43.011 "req_id": 1 00:07:43.011 } 00:07:43.011 Got JSON-RPC error response 00:07:43.011 response: 00:07:43.011 { 00:07:43.011 "code": -32602, 00:07:43.011 "message": "Invalid MN SPDK_Controller\u001f" 00:07:43.011 }' 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:07:43.011 { 00:07:43.011 "nqn": "nqn.2016-06.io.spdk:cnode10047", 00:07:43.011 "model_number": "SPDK_Controller\u001f", 00:07:43.011 "method": "nvmf_create_subsystem", 00:07:43.011 "req_id": 1 00:07:43.011 } 00:07:43.011 Got JSON-RPC error response 00:07:43.011 response: 00:07:43.011 { 00:07:43.011 "code": -32602, 00:07:43.011 "message": "Invalid MN SPDK_Controller\u001f" 00:07:43.011 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' 
'90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.011 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@25 -- # printf %x 81 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ Y == \- ]] 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'YUub;_(eWtakTsQKQ<@J' 00:07:43.268 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'YUub;_(eWtakTsQKQ<@J' nqn.2016-06.io.spdk:cnode16466 00:07:43.268 [2024-05-15 00:54:55.650226] nvmf_rpc.c: 408:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16466: invalid serial number 'YUub;_(eWtakTsQKQ<@J' 00:07:43.525 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:07:43.525 { 00:07:43.526 "nqn": "nqn.2016-06.io.spdk:cnode16466", 00:07:43.526 "serial_number": "YUub;_(eWtakT\u007fsQKQ<@J", 00:07:43.526 "method": "nvmf_create_subsystem", 00:07:43.526 "req_id": 1 00:07:43.526 } 00:07:43.526 Got JSON-RPC error response 00:07:43.526 response: 00:07:43.526 { 00:07:43.526 "code": -32602, 
00:07:43.526 "message": "Invalid SN YUub;_(eWtakT\u007fsQKQ<@J" 00:07:43.526 }' 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:07:43.526 { 00:07:43.526 "nqn": "nqn.2016-06.io.spdk:cnode16466", 00:07:43.526 "serial_number": "YUub;_(eWtakT\u007fsQKQ<@J", 00:07:43.526 "method": "nvmf_create_subsystem", 00:07:43.526 "req_id": 1 00:07:43.526 } 00:07:43.526 Got JSON-RPC error response 00:07:43.526 response: 00:07:43.526 { 00:07:43.526 "code": -32602, 00:07:43.526 "message": "Invalid SN YUub;_(eWtakT\u007fsQKQ<@J" 00:07:43.526 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:07:43.526 
00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:07:43.526 
00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:07:43.526 
00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.526 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.527 00:54:55 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.527 00:54:55 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[  == \- ]] 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '_}gjO'\''~o#np(b8-#A@>)Mb14_w2B/KZUy[EGp#62' 00:07:43.527 00:54:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '_}gjO'\''~o#np(b8-#A@>)Mb14_w2B/KZUy[EGp#62' nqn.2016-06.io.spdk:cnode12531 00:07:43.784 [2024-05-15 00:54:56.023452] nvmf_rpc.c: 417:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12531: invalid model number '_}gjO'~o#np(b8-#A@>)Mb14_w2B/KZUy[EGp#62' 00:07:43.784 00:54:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:07:43.784 { 00:07:43.784 "nqn": "nqn.2016-06.io.spdk:cnode12531", 00:07:43.784 "model_number": "\u007f_}gjO'\''~o#np(b8-#A@>)Mb14_w2B/KZUy[EGp#62", 00:07:43.784 "method": "nvmf_create_subsystem", 00:07:43.784 "req_id": 1 00:07:43.784 } 00:07:43.784 Got JSON-RPC error response 00:07:43.784 response: 00:07:43.784 { 00:07:43.784 "code": -32602, 00:07:43.784 "message": "Invalid MN \u007f_}gjO'\''~o#np(b8-#A@>)Mb14_w2B/KZUy[EGp#62" 00:07:43.784 }' 00:07:43.784 00:54:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:07:43.784 { 00:07:43.784 "nqn": "nqn.2016-06.io.spdk:cnode12531", 00:07:43.784 "model_number": "\u007f_}gjO'~o#np(b8-#A@>)Mb14_w2B/KZUy[EGp#62", 00:07:43.784 "method": "nvmf_create_subsystem", 00:07:43.784 "req_id": 1 00:07:43.784 } 00:07:43.784 Got JSON-RPC error response 00:07:43.784 response: 00:07:43.784 { 00:07:43.784 "code": -32602, 00:07:43.784 "message": "Invalid MN \u007f_}gjO'~o#np(b8-#A@>)Mb14_w2B/KZUy[EGp#62" 00:07:43.784 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:43.784 00:54:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:07:44.041 [2024-05-15 00:54:56.272398] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.041 00:54:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:07:44.300 00:54:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:07:44.300 00:54:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:07:44.300 00:54:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:07:44.300 00:54:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:07:44.300 00:54:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:07:44.586 [2024-05-15 00:54:56.765967] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:44.586 [2024-05-15 00:54:56.766065] nvmf_rpc.c: 789:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:07:44.586 00:54:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:07:44.586 { 00:07:44.586 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:44.586 "listen_address": { 00:07:44.586 "trtype": "tcp", 00:07:44.586 "traddr": "", 00:07:44.586 "trsvcid": "4421" 00:07:44.586 }, 00:07:44.586 "method": "nvmf_subsystem_remove_listener", 00:07:44.586 "req_id": 1 00:07:44.586 } 00:07:44.586 Got JSON-RPC error response 00:07:44.586 response: 00:07:44.586 { 00:07:44.586 "code": -32602, 00:07:44.586 "message": "Invalid parameters" 00:07:44.586 }' 00:07:44.586 00:54:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:07:44.586 { 00:07:44.586 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:44.586 "listen_address": { 00:07:44.586 "trtype": "tcp", 00:07:44.586 "traddr": "", 00:07:44.586 "trsvcid": "4421" 00:07:44.586 }, 00:07:44.586 "method": "nvmf_subsystem_remove_listener", 00:07:44.586 "req_id": 1 00:07:44.586 } 00:07:44.586 Got JSON-RPC error response 00:07:44.586 response: 00:07:44.586 { 00:07:44.586 "code": -32602, 00:07:44.586 "message": "Invalid parameters" 00:07:44.586 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:07:44.586 00:54:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6708 -i 0 00:07:44.843 [2024-05-15 00:54:57.010802] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6708: invalid cntlid range [0-65519] 00:07:44.843 00:54:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:07:44.843 { 00:07:44.843 "nqn": "nqn.2016-06.io.spdk:cnode6708", 00:07:44.843 "min_cntlid": 0, 00:07:44.843 "method": "nvmf_create_subsystem", 00:07:44.843 "req_id": 1 00:07:44.843 } 00:07:44.843 Got JSON-RPC error response 00:07:44.843 response: 00:07:44.843 { 00:07:44.843 "code": -32602, 00:07:44.843 "message": "Invalid cntlid range [0-65519]" 00:07:44.843 }' 00:07:44.843 00:54:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:07:44.843 { 00:07:44.843 "nqn": "nqn.2016-06.io.spdk:cnode6708", 00:07:44.843 "min_cntlid": 0, 00:07:44.843 "method": "nvmf_create_subsystem", 00:07:44.843 "req_id": 1 00:07:44.843 } 
00:07:44.843 Got JSON-RPC error response 00:07:44.843 response: 00:07:44.843 { 00:07:44.843 "code": -32602, 00:07:44.843 "message": "Invalid cntlid range [0-65519]" 00:07:44.843 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:44.843 00:54:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30861 -i 65520 00:07:45.101 [2024-05-15 00:54:57.255595] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30861: invalid cntlid range [65520-65519] 00:07:45.101 00:54:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:07:45.101 { 00:07:45.101 "nqn": "nqn.2016-06.io.spdk:cnode30861", 00:07:45.101 "min_cntlid": 65520, 00:07:45.101 "method": "nvmf_create_subsystem", 00:07:45.101 "req_id": 1 00:07:45.101 } 00:07:45.101 Got JSON-RPC error response 00:07:45.101 response: 00:07:45.101 { 00:07:45.101 "code": -32602, 00:07:45.101 "message": "Invalid cntlid range [65520-65519]" 00:07:45.101 }' 00:07:45.101 00:54:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:07:45.101 { 00:07:45.101 "nqn": "nqn.2016-06.io.spdk:cnode30861", 00:07:45.101 "min_cntlid": 65520, 00:07:45.101 "method": "nvmf_create_subsystem", 00:07:45.101 "req_id": 1 00:07:45.101 } 00:07:45.101 Got JSON-RPC error response 00:07:45.101 response: 00:07:45.101 { 00:07:45.101 "code": -32602, 00:07:45.101 "message": "Invalid cntlid range [65520-65519]" 00:07:45.101 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:45.101 00:54:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21431 -I 0 00:07:45.358 [2024-05-15 00:54:57.496398] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21431: invalid cntlid range [1-0] 00:07:45.358 00:54:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:07:45.358 { 00:07:45.358 "nqn": "nqn.2016-06.io.spdk:cnode21431", 00:07:45.358 "max_cntlid": 0, 00:07:45.358 "method": "nvmf_create_subsystem", 00:07:45.358 "req_id": 1 00:07:45.358 } 00:07:45.358 Got JSON-RPC error response 00:07:45.358 response: 00:07:45.358 { 00:07:45.358 "code": -32602, 00:07:45.358 "message": "Invalid cntlid range [1-0]" 00:07:45.358 }' 00:07:45.358 00:54:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:07:45.358 { 00:07:45.358 "nqn": "nqn.2016-06.io.spdk:cnode21431", 00:07:45.358 "max_cntlid": 0, 00:07:45.358 "method": "nvmf_create_subsystem", 00:07:45.358 "req_id": 1 00:07:45.358 } 00:07:45.358 Got JSON-RPC error response 00:07:45.358 response: 00:07:45.358 { 00:07:45.358 "code": -32602, 00:07:45.358 "message": "Invalid cntlid range [1-0]" 00:07:45.358 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:45.358 00:54:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21948 -I 65520 00:07:45.615 [2024-05-15 00:54:57.749291] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21948: invalid cntlid range [1-65520] 00:07:45.615 00:54:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:07:45.615 { 00:07:45.615 "nqn": "nqn.2016-06.io.spdk:cnode21948", 00:07:45.615 "max_cntlid": 65520, 00:07:45.615 "method": "nvmf_create_subsystem", 00:07:45.615 "req_id": 1 00:07:45.615 } 00:07:45.615 Got JSON-RPC 
error response 00:07:45.615 response: 00:07:45.615 { 00:07:45.615 "code": -32602, 00:07:45.615 "message": "Invalid cntlid range [1-65520]" 00:07:45.615 }' 00:07:45.615 00:54:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:07:45.615 { 00:07:45.615 "nqn": "nqn.2016-06.io.spdk:cnode21948", 00:07:45.615 "max_cntlid": 65520, 00:07:45.615 "method": "nvmf_create_subsystem", 00:07:45.615 "req_id": 1 00:07:45.615 } 00:07:45.615 Got JSON-RPC error response 00:07:45.615 response: 00:07:45.615 { 00:07:45.615 "code": -32602, 00:07:45.615 "message": "Invalid cntlid range [1-65520]" 00:07:45.615 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:45.615 00:54:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29091 -i 6 -I 5 00:07:45.615 [2024-05-15 00:54:57.994101] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29091: invalid cntlid range [6-5] 00:07:45.873 00:54:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:07:45.873 { 00:07:45.873 "nqn": "nqn.2016-06.io.spdk:cnode29091", 00:07:45.873 "min_cntlid": 6, 00:07:45.873 "max_cntlid": 5, 00:07:45.873 "method": "nvmf_create_subsystem", 00:07:45.873 "req_id": 1 00:07:45.873 } 00:07:45.873 Got JSON-RPC error response 00:07:45.873 response: 00:07:45.873 { 00:07:45.873 "code": -32602, 00:07:45.873 "message": "Invalid cntlid range [6-5]" 00:07:45.873 }' 00:07:45.873 00:54:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:07:45.873 { 00:07:45.873 "nqn": "nqn.2016-06.io.spdk:cnode29091", 00:07:45.873 "min_cntlid": 6, 00:07:45.873 "max_cntlid": 5, 00:07:45.873 "method": "nvmf_create_subsystem", 00:07:45.873 "req_id": 1 00:07:45.873 } 00:07:45.873 Got JSON-RPC error response 00:07:45.873 response: 00:07:45.873 { 00:07:45.873 "code": -32602, 00:07:45.873 "message": "Invalid cntlid range [6-5]" 00:07:45.873 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:45.873 00:54:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:07:45.873 00:54:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:07:45.873 { 00:07:45.873 "name": "foobar", 00:07:45.873 "method": "nvmf_delete_target", 00:07:45.873 "req_id": 1 00:07:45.873 } 00:07:45.873 Got JSON-RPC error response 00:07:45.873 response: 00:07:45.873 { 00:07:45.874 "code": -32602, 00:07:45.874 "message": "The specified target doesn'\''t exist, cannot delete it." 00:07:45.874 }' 00:07:45.874 00:54:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:07:45.874 { 00:07:45.874 "name": "foobar", 00:07:45.874 "method": "nvmf_delete_target", 00:07:45.874 "req_id": 1 00:07:45.874 } 00:07:45.874 Got JSON-RPC error response 00:07:45.874 response: 00:07:45.874 { 00:07:45.874 "code": -32602, 00:07:45.874 "message": "The specified target doesn't exist, cannot delete it." 
00:07:45.874 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:07:45.874 00:54:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:07:45.874 00:54:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:07:45.874 00:54:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:45.874 00:54:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:07:45.874 00:54:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:45.874 00:54:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:07:45.874 00:54:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:45.874 00:54:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:45.874 rmmod nvme_tcp 00:07:45.874 rmmod nvme_fabrics 00:07:45.874 rmmod nvme_keyring 00:07:45.874 00:54:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:45.874 00:54:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:07:45.874 00:54:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:07:45.874 00:54:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1169574 ']' 00:07:45.874 00:54:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1169574 00:07:45.874 00:54:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 1169574 ']' 00:07:45.874 00:54:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 1169574 00:07:45.874 00:54:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:07:45.874 00:54:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:45.874 00:54:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1169574 00:07:45.874 00:54:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:45.874 00:54:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:45.874 00:54:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1169574' 00:07:45.874 killing process with pid 1169574 00:07:45.874 00:54:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 1169574 00:07:45.874 [2024-05-15 00:54:58.197022] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:45.874 00:54:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 1169574 00:07:46.132 00:54:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:46.132 00:54:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:46.132 00:54:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:46.132 00:54:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:46.132 00:54:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:46.132 00:54:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.132 00:54:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:46.132 00:54:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.665 00:55:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
00:07:48.665 00:07:48.665 real 0m9.840s 00:07:48.665 user 0m22.523s 00:07:48.665 sys 0m2.860s 00:07:48.665 00:55:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:48.665 00:55:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:48.665 ************************************ 00:07:48.665 END TEST nvmf_invalid 00:07:48.665 ************************************ 00:07:48.665 00:55:00 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:48.665 00:55:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:48.665 00:55:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:48.665 00:55:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:48.665 ************************************ 00:07:48.665 START TEST nvmf_abort 00:07:48.665 ************************************ 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:48.665 * Looking for test storage... 00:07:48.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:48.665 00:55:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.666 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:48.666 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:48.666 00:55:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:48.666 00:55:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:51.199 00:55:03 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:51.199 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:51.199 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:51.199 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:51.199 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:51.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:51.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:07:51.199 00:07:51.199 --- 10.0.0.2 ping statistics --- 00:07:51.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.199 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:07:51.199 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:51.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:51.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:07:51.199 00:07:51.199 --- 10.0.0.1 ping statistics --- 00:07:51.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.200 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:07:51.200 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:51.200 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:07:51.200 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:51.200 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:51.200 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:51.200 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:51.200 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:51.200 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:51.200 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:51.200 00:55:03 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:51.200 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:51.200 00:55:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:51.200 00:55:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:51.200 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1172553 00:07:51.200 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:51.200 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1172553 00:07:51.200 00:55:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 1172553 ']' 00:07:51.200 00:55:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.200 00:55:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:51.200 00:55:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.200 00:55:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:51.200 00:55:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:51.200 [2024-05-15 00:55:03.348422] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
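The nvmf_tcp_init sequence traced above reduces to the following commands (distilled from the trace; the cvl_0_0/cvl_0_1 names, the 10.0.0.1/10.0.0.2 addresses and the cvl_0_0_ns_spdk namespace are simply what this host uses). The target-side port is moved into its own network namespace so the initiator (cvl_0_1, 10.0.0.1) and the target (cvl_0_0, 10.0.0.2) sit in separate network stacks:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port lives in its own namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic (port 4420) back in on the initiator side
    ping -c 1 10.0.0.2                                                 # sanity checks in both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The modprobe nvme-tcp that follows loads the kernel NVMe/TCP initiator modules; nvmftestfini unloads them again at the end of the test, as the rmmod lines further down show.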
00:07:51.200 [2024-05-15 00:55:03.348518] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.200 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.200 [2024-05-15 00:55:03.424110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:51.200 [2024-05-15 00:55:03.533853] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:51.200 [2024-05-15 00:55:03.533909] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:51.200 [2024-05-15 00:55:03.533944] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.200 [2024-05-15 00:55:03.533957] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.200 [2024-05-15 00:55:03.533967] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:51.200 [2024-05-15 00:55:03.534104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.200 [2024-05-15 00:55:03.534173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.200 [2024-05-15 00:55:03.534176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.458 00:55:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:51.458 00:55:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:07:51.458 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:51.458 00:55:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:51.458 00:55:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:51.458 00:55:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:51.458 00:55:03 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:51.458 00:55:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.458 00:55:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:51.458 [2024-05-15 00:55:03.677727] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.458 00:55:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.458 00:55:03 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:51.458 00:55:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.458 00:55:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:51.458 Malloc0 00:07:51.458 00:55:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.458 00:55:03 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:51.458 00:55:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.458 00:55:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:51.458 Delay0 00:07:51.458 00:55:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.458 00:55:03 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:51.458 00:55:03 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.458 00:55:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:51.458 00:55:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.458 00:55:03 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:51.458 00:55:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.458 00:55:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:51.458 00:55:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.458 00:55:03 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:51.458 00:55:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.458 00:55:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:51.458 [2024-05-15 00:55:03.742028] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:51.458 [2024-05-15 00:55:03.742367] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:51.458 00:55:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.458 00:55:03 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:51.458 00:55:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.458 00:55:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:51.458 00:55:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.458 00:55:03 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:51.458 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.458 [2024-05-15 00:55:03.838308] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:53.982 Initializing NVMe Controllers 00:07:53.982 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:53.982 controller IO queue size 128 less than required 00:07:53.982 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:53.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:53.982 Initialization complete. Launching workers. 
00:07:53.982 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 30411 00:07:53.982 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 30472, failed to submit 62 00:07:53.982 success 30415, unsuccess 57, failed 0 00:07:53.982 00:55:05 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:53.982 00:55:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.982 00:55:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.982 00:55:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.982 00:55:05 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:53.982 00:55:05 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:53.982 00:55:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:53.982 00:55:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:53.982 00:55:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:53.982 00:55:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:53.982 00:55:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:53.982 00:55:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:53.982 rmmod nvme_tcp 00:07:53.982 rmmod nvme_fabrics 00:07:53.982 rmmod nvme_keyring 00:07:53.982 00:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:53.982 00:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:53.982 00:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:53.982 00:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1172553 ']' 00:07:53.982 00:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1172553 00:07:53.982 00:55:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 1172553 ']' 00:07:53.982 00:55:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 1172553 00:07:53.982 00:55:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:07:53.982 00:55:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:53.982 00:55:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1172553 00:07:53.982 00:55:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:07:53.982 00:55:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:07:53.982 00:55:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1172553' 00:07:53.982 killing process with pid 1172553 00:07:53.982 00:55:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 1172553 00:07:53.982 [2024-05-15 00:55:06.048480] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:53.982 00:55:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 1172553 00:07:53.982 00:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:53.982 00:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:53.982 00:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:53.982 00:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:53.982 
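Stripped of the trace prefixes, the abort-test bring-up and run recorded above come down to this. It is a sketch, not a verbatim copy: rpc_cmd is the autotest wrapper around scripts/rpc.py, the long workspace paths are abbreviated, and the backgrounding/pid handling paraphrases the nvmfappstart and waitforlisten helpers seen in the trace:

    # start the target inside the namespace created earlier (pid 1172553 in this run)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    waitforlisten "$nvmfpid"                     # block until /var/tmp/spdk.sock accepts RPCs

    # provision a TCP subsystem whose namespace sits behind an artificially slow delay bdev
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
    rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # connect the abort example to the listener: queue reads behind the delay bdev, then abort them
    ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The Delay0 bdev in front of Malloc0 is what keeps reads in flight long enough for the abort commands to find them, which is why the run above reports tens of thousands of aborts submitted with nearly all of them succeeding.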
00:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:53.982 00:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.982 00:55:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:53.982 00:55:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.514 00:55:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:56.514 00:07:56.514 real 0m7.814s 00:07:56.514 user 0m10.732s 00:07:56.514 sys 0m2.923s 00:07:56.514 00:55:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:56.514 00:55:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:56.514 ************************************ 00:07:56.514 END TEST nvmf_abort 00:07:56.514 ************************************ 00:07:56.514 00:55:08 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:56.514 00:55:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:56.514 00:55:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:56.514 00:55:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:56.514 ************************************ 00:07:56.514 START TEST nvmf_ns_hotplug_stress 00:07:56.514 ************************************ 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:56.514 * Looking for test storage... 00:07:56.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:56.514 
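The wind-down that closes out nvmf_abort above (through the "END TEST" banner and the 7.8 s timing summary) is the stock nvmftestfini path; flattened out, it is roughly:

    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    modprobe -v -r nvme-tcp            # drops nvme_tcp, nvme_fabrics, nvme_keyring (the rmmod lines in the trace)
    modprobe -v -r nvme-fabrics
    kill 1172553                       # killprocess: stop the nvmf_tgt reactor
    wait 1172553
    _remove_spdk_ns                    # tears down the cvl_0_0_ns_spdk namespace; its body runs with xtrace muted, so it is not visible above
    ip -4 addr flush cvl_0_1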
00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:56.514 
00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:07:56.514 00:55:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:07:59.042 00:55:11 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:59.042 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:59.042 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.042 
00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:59.042 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:59.042 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:59.043 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:59.043 
00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:59.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:59.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:07:59.043 00:07:59.043 --- 10.0.0.2 ping statistics --- 00:07:59.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.043 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:59.043 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:59.043 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:07:59.043 00:07:59.043 --- 10.0.0.1 ping statistics --- 00:07:59.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.043 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1175213 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1175213 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 1175213 ']' 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:59.043 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:59.043 [2024-05-15 00:55:11.344096] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:07:59.043 [2024-05-15 00:55:11.344175] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.043 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.043 [2024-05-15 00:55:11.420813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:59.301 [2024-05-15 00:55:11.529276] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:07:59.301 [2024-05-15 00:55:11.529330] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:59.301 [2024-05-15 00:55:11.529359] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:59.301 [2024-05-15 00:55:11.529370] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:59.301 [2024-05-15 00:55:11.529379] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:59.301 [2024-05-15 00:55:11.529469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:59.301 [2024-05-15 00:55:11.529531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:59.301 [2024-05-15 00:55:11.529535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.301 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:59.301 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:07:59.301 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:59.301 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:59.301 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:59.301 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:59.301 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:59.301 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:59.559 [2024-05-15 00:55:11.891716] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:59.559 00:55:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:59.816 00:55:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:00.073 [2024-05-15 00:55:12.434326] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:00.073 [2024-05-15 00:55:12.434568] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.073 00:55:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:00.638 00:55:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:00.638 Malloc0 00:08:00.896 00:55:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:00.896 Delay0 00:08:00.896 00:55:13 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.462 00:55:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:01.462 NULL1 00:08:01.462 00:55:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:01.719 00:55:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1175566 00:08:01.719 00:55:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:01.719 00:55:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1175566 00:08:02.005 00:55:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.005 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.005 00:55:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.263 00:55:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:02.263 00:55:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:02.520 true 00:08:02.520 00:55:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1175566 00:08:02.520 00:55:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.777 00:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.035 00:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:03.035 00:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:03.292 true 00:08:03.549 00:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1175566 00:08:03.549 00:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.807 00:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.065 00:55:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:04.065 00:55:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1003 00:08:04.323 true 00:08:04.323 00:55:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1175566 00:08:04.323 00:55:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.256 Read completed with error (sct=0, sc=11) 00:08:05.256 00:55:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.513 00:55:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:05.513 00:55:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:05.770 true 00:08:05.770 00:55:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1175566 00:08:05.770 00:55:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.028 00:55:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.285 00:55:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:06.285 00:55:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:06.543 true 00:08:06.543 00:55:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1175566 00:08:06.543 00:55:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.475 00:55:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.475 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:07.475 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:07.475 00:55:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:07.475 00:55:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:07.732 true 00:08:07.732 00:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1175566 00:08:07.732 00:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.296 00:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.296 00:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:08.296 00:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:08.556 true 00:08:08.556 00:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1175566 00:08:08.556 00:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.491 00:55:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.748 00:55:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:09.748 00:55:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:10.006 true 00:08:10.006 00:55:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1175566 00:08:10.006 00:55:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.263 00:55:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.522 00:55:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:10.522 00:55:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:10.780 true 00:08:10.780 00:55:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1175566 00:08:10.780 00:55:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.712 00:55:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.712 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.712 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.712 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.969 00:55:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:11.969 00:55:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:11.969 true 00:08:11.969 00:55:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1175566 00:08:11.969 00:55:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.226 00:55:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.485 00:55:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 
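For reference, the ns_hotplug_stress bring-up earlier in this trace (nvmfappstart through the spdk_nvme_perf launch) follows the same pattern as the abort test, this time driven through scripts/rpc.py directly. Condensed, with the workspace paths shortened and the backgrounding of spdk_nvme_perf paraphrased from the PERF_PID assignment in the trace:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &     # pid 1175213 in this run
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # -m 10 caps the subsystem at 10 namespaces
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0          # namespace 1: the slow delay bdev
    scripts/rpc.py bdev_null_create NULL1 1000 512                                  # NULL1: the null bdev that bdev_null_resize grows below
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1           # namespace 2
    ./build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
            -t 30 -q 128 -w randread -o 512 -Q 1000 &                               # 30 s of random reads against the subsystem
    PERF_PID=$!                                                                     # 1175566 in this run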
00:08:12.485 00:55:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:12.743 true 00:08:12.743 00:55:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1175566 00:08:12.743 00:55:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.677 00:55:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.677 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.934 00:55:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:13.935 00:55:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:14.192 true 00:08:14.192 00:55:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1175566 00:08:14.192 00:55:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.450 00:55:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.708 00:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:14.708 00:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:14.965 true 00:08:14.965 00:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1175566 00:08:14.965 00:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.938 00:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.938 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:16.196 00:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:16.196 00:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:16.453 true 00:08:16.453 00:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1175566 00:08:16.453 00:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.710 00:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.967 00:55:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1015 00:08:16.967 00:55:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:17.227 true 00:08:17.227 00:55:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1175566 00:08:17.227 00:55:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.483 00:55:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.739 00:55:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:17.739 00:55:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:17.997 true 00:08:17.997 00:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1175566 00:08:17.997 00:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.931 00:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.188 00:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:19.188 00:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:19.445 true 00:08:19.445 00:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1175566 00:08:19.445 00:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.702 00:55:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.959 00:55:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:19.959 00:55:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:20.217 true 00:08:20.217 00:55:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1175566 00:08:20.217 00:55:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.474 00:55:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.731 00:55:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:20.731 00:55:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:20.990 true 00:08:20.990 00:55:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1175566 00:08:20.990 00:55:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.940 00:55:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.198 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.198 00:55:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:22.198 00:55:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:22.455 true 00:08:22.455 00:55:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1175566 00:08:22.455 00:55:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.713 00:55:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.971 00:55:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:22.971 00:55:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:23.228 true 00:08:23.228 00:55:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1175566 00:08:23.228 00:55:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.160 00:55:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.160 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:24.160 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:24.417 00:55:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:24.417 00:55:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:24.675 true 00:08:24.675 00:55:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1175566 00:08:24.675 00:55:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.933 00:55:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.192 00:55:37 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:25.192 00:55:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:25.453 true 00:08:25.453 00:55:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1175566 00:08:25.453 00:55:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.389 00:55:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.646 00:55:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:26.646 00:55:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:26.904 true 00:08:26.904 00:55:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1175566 00:08:26.904 00:55:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.162 00:55:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.420 00:55:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:27.420 00:55:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:27.677 true 00:08:27.677 00:55:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1175566 00:08:27.677 00:55:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.935 00:55:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.197 00:55:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:28.197 00:55:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:28.496 true 00:08:28.496 00:55:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1175566 00:08:28.496 00:55:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.431 00:55:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.431 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.688 
00:55:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:29.688 00:55:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:29.688 true 00:08:29.945 00:55:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1175566 00:08:29.945 00:55:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.945 00:55:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:30.202 00:55:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:30.202 00:55:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:30.459 true 00:08:30.459 00:55:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1175566 00:08:30.459 00:55:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.833 00:55:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:31.833 00:55:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:31.833 00:55:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:32.090 true 00:08:32.090 00:55:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1175566 00:08:32.090 00:55:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.348 Initializing NVMe Controllers 00:08:32.349 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:32.349 Controller IO queue size 128, less than required. 00:08:32.349 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:32.349 Controller IO queue size 128, less than required. 00:08:32.349 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:32.349 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:32.349 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:32.349 Initialization complete. Launching workers. 
00:08:32.349 ======================================================== 00:08:32.349 Latency(us) 00:08:32.349 Device Information : IOPS MiB/s Average min max 00:08:32.349 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 510.38 0.25 111386.71 2486.68 1077263.35 00:08:32.349 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9285.11 4.53 13746.04 2900.78 447004.94 00:08:32.349 ======================================================== 00:08:32.349 Total : 9795.49 4.78 18833.49 2486.68 1077263.35 00:08:32.349 00:08:32.349 00:55:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.606 00:55:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:08:32.606 00:55:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:08:32.864 true 00:08:32.864 00:55:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1175566 00:08:32.864 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1175566) - No such process 00:08:32.864 00:55:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1175566 00:08:32.864 00:55:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.123 00:55:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:33.381 00:55:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:33.381 00:55:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:33.381 00:55:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:33.381 00:55:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:33.381 00:55:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:33.639 null0 00:08:33.639 00:55:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:33.639 00:55:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:33.639 00:55:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:33.896 null1 00:08:33.896 00:55:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:33.896 00:55:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:33.896 00:55:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:34.154 null2 00:08:34.154 00:55:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:34.154 00:55:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < 
nthreads )) 00:08:34.154 00:55:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:34.154 null3 00:08:34.154 00:55:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:34.154 00:55:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:34.154 00:55:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:34.412 null4 00:08:34.412 00:55:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:34.412 00:55:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:34.412 00:55:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:34.670 null5 00:08:34.670 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:34.670 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:34.670 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:34.928 null6 00:08:34.928 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:34.928 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:34.928 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:35.186 null7 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
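By this point in the log the single-namespace phase has wound down: the kill -0 liveness check on PID 1175566 finally reports "No such process", the script waits for it, removes namespaces 1 and 2, and switches to eight parallel hotplug workers. For orientation, that earlier phase (ns_hotplug_stress.sh lines 44-50 in the trace) amounts to roughly the loop below; this is a reconstruction from the xtrace output, where perf_pid is a stand-in name and rpc.py is shown with a shortened path rather than the full workspace path.

    null_size=1010                                   # counts upward; the excerpt above shows 1011-1030
    while kill -0 "$perf_pid"; do                    # keep cycling while the I/O generator still runs
        scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        scripts/rpc.py bdev_null_resize NULL1 "$null_size"   # resize NULL1 a little more each pass
    done
    wait "$perf_pid"                                 # reap the generator once it has exited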
00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
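Each of those workers runs the add_remove helper traced at ns_hotplug_stress.sh lines 14-18: a ten-iteration attach/detach loop against cnode1. The sketch below is reconstructed from the traced commands; the positional-parameter signature is an assumption based on the traced "local nsid=... bdev=..." line.

    add_remove() {
        local nsid=$1 bdev=$2            # e.g. add_remove 1 null0
        local i
        for ((i = 0; i < 10; i++)); do
            # attach the null bdev as namespace $nsid of cnode1, then immediately detach it
            scripts/rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }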
00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
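The surrounding driver (lines 58-66 in the trace) first creates one null bdev per worker, then launches the eight add_remove loops in the background and waits on all of them; the PIDs 1179625-1179638 passed to the wait further down are those workers. Approximately, with the same shortened rpc.py path as above:

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        scripts/rpc.py bdev_null_create "null$i" 100 4096    # size 100, block size 4096, per the traced arguments
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &                     # namespaces 1-8 churned concurrently
        pids+=($!)
    done
    wait "${pids[@]}"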
00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1179625 1179626 1179628 1179630 1179632 1179634 1179636 1179638 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.186 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:35.445 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:35.445 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:35.445 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:35.445 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:35.445 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:35.445 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.445 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:35.445 00:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:35.703 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.703 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.703 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:35.703 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.703 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.703 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:08:35.703 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.703 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.703 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:35.703 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.703 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.704 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:35.704 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.704 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.704 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:35.704 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.704 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.704 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:35.704 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.704 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.704 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:35.962 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.962 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.962 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:35.962 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:35.962 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:35.962 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:35.962 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:35.962 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:36.221 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:36.221 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.221 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:36.221 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.221 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.221 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:36.221 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.221 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.221 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:36.479 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.479 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.480 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:36.480 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.480 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.480 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:36.480 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.480 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.480 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.480 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:36.480 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.480 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:36.480 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.480 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.480 00:55:48 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:36.480 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.480 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.480 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:36.738 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:36.738 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:36.738 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:36.738 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:36.738 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:36.738 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:36.738 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.738 00:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:36.996 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.996 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.996 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:36.996 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.996 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.996 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:36.996 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.996 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.996 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:36.996 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.996 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.996 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:36.996 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.996 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.996 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:36.996 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.996 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.996 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:36.996 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.996 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.996 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:36.996 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.996 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.996 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:37.254 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:37.254 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:37.254 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:37.255 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:37.255 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:37.255 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:37.255 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.255 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:37.514 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.514 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.514 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:37.514 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.514 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.514 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:37.514 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.514 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.514 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:37.514 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.514 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.514 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:37.514 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.514 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.514 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:37.514 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.514 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.514 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:37.514 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.514 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.514 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:37.514 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.514 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.514 
00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:37.772 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:37.772 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:37.772 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:37.772 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.772 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:37.772 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:37.772 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:37.772 00:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:38.031 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.031 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.031 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:38.031 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.031 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.031 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:38.031 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.031 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.031 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:38.031 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.031 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.031 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:38.031 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.031 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.031 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:38.031 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.031 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.031 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.031 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:38.031 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.031 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:38.031 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.031 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.031 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:38.289 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:38.289 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:38.289 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:38.289 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.289 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:38.289 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:38.289 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:38.289 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:38.548 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:08:38.548 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.548 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:38.548 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.548 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.548 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:38.548 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.548 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.548 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:38.548 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.548 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.548 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:38.548 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.548 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.548 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:38.548 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.548 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.548 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:38.548 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.548 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.548 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:38.548 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.548 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.548 00:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:38.807 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:38.807 
00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:38.807 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:38.807 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:38.807 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.807 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:38.807 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:38.807 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:39.065 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.065 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.065 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:39.065 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.065 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.065 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:39.065 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.065 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.065 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:39.065 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.065 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.065 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:39.065 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.065 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.065 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:39.065 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.065 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.065 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:39.065 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.065 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.065 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:39.065 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.065 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.065 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:39.324 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:39.324 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:39.324 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:39.324 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:39.324 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.324 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:39.324 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:39.324 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:39.582 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.582 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.582 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:39.582 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:08:39.582 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.582 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:39.582 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.582 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.582 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:39.582 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.582 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.582 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:39.582 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.582 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.582 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:39.582 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.582 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.582 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:39.582 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.582 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.582 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:39.582 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.582 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.582 00:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:39.841 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:39.841 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:39.841 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:39.841 
00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:39.841 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:39.841 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:39.841 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:39.841 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.099 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.099 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.099 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:40.099 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.099 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.099 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:40.099 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.099 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.099 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:40.099 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.099 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.099 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:40.099 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.099 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.099 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:40.099 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.099 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.099 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:40.099 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.099 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.099 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:40.099 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.099 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.099 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:40.357 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:40.357 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:40.357 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:40.357 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:40.357 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:40.357 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.357 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:40.357 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:40.616 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.616 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.616 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.616 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.616 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.616 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.616 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.616 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.616 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:40.616 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.616 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.616 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.616 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.616 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.616 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.616 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.616 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:40.616 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:40.616 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:40.616 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:40.616 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:40.616 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:40.616 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:40.616 00:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:40.616 rmmod nvme_tcp 00:08:40.616 rmmod nvme_fabrics 00:08:40.874 rmmod nvme_keyring 00:08:40.874 00:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:40.874 00:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:40.874 00:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:40.874 00:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1175213 ']' 00:08:40.874 00:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1175213 00:08:40.874 00:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 1175213 ']' 00:08:40.874 00:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 1175213 00:08:40.874 00:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:08:40.874 00:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:40.874 00:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1175213 00:08:40.875 00:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:08:40.875 00:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:08:40.875 00:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1175213' 00:08:40.875 killing process with pid 1175213 00:08:40.875 00:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 1175213 00:08:40.875 [2024-05-15 00:55:53.067618] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:40.875 00:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 1175213 00:08:41.144 00:55:53 
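The @16-@18 iterations traced above keep attaching namespaces 1-8 of nqn.2016-06.io.spdk:cnode1 to the null0..null7 bdevs and stripping them off again, round after round, entirely through scripts/rpc.py. A minimal sketch of that add/remove pattern follows; it is an illustration rather than the SPDK test script itself, it assumes the target, the subsystem and the null bdevs already exist, and it uses shuf as a stand-in for whatever ordering the real test applies:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  for round in $(seq 1 10); do                   # repeated hot-plug rounds, matching the (( i < 10 )) trace
      for n in $(shuf -e 1 2 3 4 5 6 7 8); do    # attach null0..null7 as nsid 1..8, in random order
          "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
      done
      for n in $(shuf -e 1 2 3 4 5 6 7 8); do    # detach the same namespaces again
          "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
      done
  done

The subsystem stays live the whole time, which is the point of the stress test: namespaces appear and disappear underneath whatever is connected to cnode1.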
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:41.144 00:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:41.144 00:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:41.144 00:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:41.144 00:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:41.144 00:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.144 00:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:41.144 00:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.121 00:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:43.121 00:08:43.121 real 0m46.936s 00:08:43.121 user 3m31.798s 00:08:43.121 sys 0m16.219s 00:08:43.121 00:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:43.121 00:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:43.121 ************************************ 00:08:43.121 END TEST nvmf_ns_hotplug_stress 00:08:43.121 ************************************ 00:08:43.121 00:55:55 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:43.121 00:55:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:43.121 00:55:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:43.121 00:55:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:43.121 ************************************ 00:08:43.121 START TEST nvmf_connect_stress 00:08:43.121 ************************************ 00:08:43.121 00:55:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:43.121 * Looking for test storage... 
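The nvmftestfini sequence that closes nvmf_ns_hotplug_stress just above always has the same shape: unload the kernel NVMe/TCP initiator modules, kill the nvmf_tgt started for the test, then drop the per-test network namespace and addressing. A hedged sketch of those steps (nvmfpid stands for the target PID recorded at start-up, 1175213 in this run, and the ip netns delete line is only a stand-in for the _remove_spdk_ns helper, whose body is not visible in the trace):

  sync
  modprobe -v -r nvme-tcp                       # the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring going away
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"            # stop the target reactor and reap it
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumption: what _remove_spdk_ns amounts to for this run
  ip -4 addr flush cvl_0_1                      # clear the initiator-side address, as traced

Only after this cleanup returns are the real/user/sys timings and the END TEST banner printed.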
00:08:43.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:43.121 00:55:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:43.121 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:08:43.121 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.121 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.121 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.121 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.121 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.121 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.121 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.121 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.121 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.121 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.121 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:43.121 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:43.121 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.121 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.121 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:43.121 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:43.121 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:43.121 00:55:55 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.121 00:55:55 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.380 00:55:55 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.380 00:55:55 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.380 00:55:55 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.380 00:55:55 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.380 00:55:55 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:08:43.380 00:55:55 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.381 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:08:43.381 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:43.381 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:43.381 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:43.381 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.381 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.381 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:43.381 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:43.381 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:43.381 00:55:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:08:43.381 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:43.381 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:43.381 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:43.381 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:43.381 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:43.381 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.381 00:55:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:43.381 00:55:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.381 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:43.381 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:43.381 00:55:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:43.381 00:55:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.918 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:45.918 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:45.918 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:45.918 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:45.918 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:45.918 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:45.919 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:45.919 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:45.919 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:45.919 00:55:58 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:45.919 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:45.919 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:45.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:45.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:08:45.919 00:08:45.919 --- 10.0.0.2 ping statistics --- 00:08:45.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.920 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:08:45.920 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:45.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:45.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:08:45.920 00:08:45.920 --- 10.0.0.1 ping statistics --- 00:08:45.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.920 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:08:45.920 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.920 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:08:45.920 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:45.920 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.920 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:45.920 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:45.920 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.920 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:45.920 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:45.920 00:55:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:08:45.920 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:45.920 00:55:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:45.920 00:55:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.920 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1182687 00:08:45.920 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:45.920 00:55:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1182687 00:08:45.920 00:55:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 1182687 ']' 00:08:45.920 00:55:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.920 00:55:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:45.920 00:55:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.920 00:55:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:45.920 00:55:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.920 [2024-05-15 00:55:58.212836] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
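Condensing the nvmftestinit / nvmf_tcp_init trace above: the two E810 ports show up as cvl_0_0 and cvl_0_1, cvl_0_0 is moved into a private network namespace to play the target, cvl_0_1 stays in the default namespace as the initiator, and a ping in each direction proves the 10.0.0.0/24 link before any NVMe traffic flows. Every command below appears verbatim in the trace; they are only collected here for readability:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                         # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT      # admit NVMe/TCP on the initiator port
  ping -c 1 10.0.0.2                                                # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                  # target -> initiator

Both pings come back in well under a millisecond here (0.201 ms and 0.149 ms), so the transport path is sound before the target is even started.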
00:08:45.920 [2024-05-15 00:55:58.212924] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.920 EAL: No free 2048 kB hugepages reported on node 1 00:08:45.920 [2024-05-15 00:55:58.291848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:46.178 [2024-05-15 00:55:58.416436] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.178 [2024-05-15 00:55:58.416496] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:46.178 [2024-05-15 00:55:58.416512] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:46.178 [2024-05-15 00:55:58.416525] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:46.178 [2024-05-15 00:55:58.416537] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:46.178 [2024-05-15 00:55:58.416634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.178 [2024-05-15 00:55:58.419959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:46.178 [2024-05-15 00:55:58.419972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.110 00:55:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:47.110 00:55:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:08:47.110 00:55:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:47.110 00:55:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:47.110 00:55:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.110 00:55:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:47.110 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:47.110 00:55:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.110 00:55:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.110 [2024-05-15 00:55:59.182088] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:47.110 00:55:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.111 [2024-05-15 00:55:59.199271] nvmf_rpc.c: 610:decode_rpc_listen_address: 
*WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:47.111 [2024-05-15 00:55:59.208066] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.111 NULL1 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1182842 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.111 
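With nvmf_tgt up inside cvl_0_0_ns_spdk and listening on /var/tmp/spdk.sock, connect_stress.sh configures it through four rpc_cmd calls (rpc_cmd is the autotest helper that forwards to scripts/rpc.py). The equivalent direct invocations, with every flag copied from the trace, would look roughly like:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$rpc" nvmf_create_transport -t tcp -o -u 8192                    # transport options come straight from NVMF_TRANSPORT_OPTS in the log
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$rpc" bdev_null_create NULL1 1000 512                            # null bdev NULL1: 1000 MB, 512-byte blocks

Going by rpc.py's usual flags, -a, -s and -m on the subsystem are allow-any-host, the serial number and the namespace cap. The listener call is also what triggers the deprecation warning above: [listen_]address.transport is being replaced by trtype and is slated for removal in v24.09.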
00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.111 EAL: No free 2048 kB hugepages reported on node 1 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1182842 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.111 00:55:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.369 00:55:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.369 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1182842 00:08:47.369 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:47.369 00:55:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.369 00:55:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.627 00:55:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.627 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1182842 00:08:47.627 00:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:47.627 00:55:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.627 00:55:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.885 00:56:00 
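With the target configured, the test starts the connect_stress client against the new listener and then, for as long as the client is alive, keeps poking the target over RPC: that is what the repeated @34 kill -0 / @35 rpc_cmd pairs below are. A hedged reconstruction of that outer loop follows; the connect_stress command line is verbatim from the trace, but the backgrounding, the $! capture and the stdin redirection are assumptions (set -x does not show redirections), and the payload the @28 cat loop writes into rpc.txt is not visible in the log:

  stress=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt

  "$stress" -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -t 10 &                                    # -c 0x1: one core; -t 10: run for roughly ten seconds
  PERF_PID=$!                                    # 1182842 in this run

  while kill -0 "$PERF_PID" 2>/dev/null; do      # is the stress client still running?
      "$rpc" < "$rpcs"                           # keep the RPC plane busy with the queued requests
  done
  wait "$PERF_PID"

Whether rpc_cmd really replays rpc.txt this way is an assumption; what the log does establish is that the target has to keep answering RPCs while connections are opened and torn down at full speed.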
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.886 00:56:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1182842 00:08:47.886 00:56:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:47.886 00:56:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.886 00:56:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:48.449 00:56:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.449 00:56:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1182842 00:08:48.449 00:56:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:48.449 00:56:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.449 00:56:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:48.707 00:56:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.707 00:56:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1182842 00:08:48.707 00:56:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:48.707 00:56:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.707 00:56:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:48.964 00:56:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.964 00:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1182842 00:08:48.964 00:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:48.964 00:56:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.964 00:56:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:49.220 00:56:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.220 00:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1182842 00:08:49.220 00:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:49.220 00:56:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.220 00:56:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:49.478 00:56:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.478 00:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1182842 00:08:49.478 00:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:49.478 00:56:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.478 00:56:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:50.043 00:56:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.043 00:56:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1182842 00:08:50.043 00:56:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:50.043 00:56:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.043 00:56:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:50.300 00:56:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:08:50.300 00:56:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1182842 00:08:50.300 00:56:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:50.301 00:56:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.301 00:56:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:50.558 00:56:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.558 00:56:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1182842 00:08:50.558 00:56:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:50.558 00:56:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.558 00:56:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:50.815 00:56:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.815 00:56:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1182842 00:08:50.815 00:56:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:50.815 00:56:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.815 00:56:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:51.073 00:56:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.073 00:56:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1182842 00:08:51.073 00:56:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:51.073 00:56:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.073 00:56:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:51.635 00:56:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.635 00:56:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1182842 00:08:51.635 00:56:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:51.635 00:56:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.635 00:56:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:51.892 00:56:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.892 00:56:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1182842 00:08:51.892 00:56:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:51.892 00:56:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.892 00:56:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:52.149 00:56:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.149 00:56:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1182842 00:08:52.149 00:56:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:52.149 00:56:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.149 00:56:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:52.406 00:56:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.406 00:56:04 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1182842 00:08:52.406 00:56:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:52.406 00:56:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.406 00:56:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:52.663 00:56:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.663 00:56:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1182842 00:08:52.663 00:56:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:52.663 00:56:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.663 00:56:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:53.229 00:56:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.229 00:56:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1182842 00:08:53.229 00:56:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:53.229 00:56:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.229 00:56:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:53.486 00:56:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.486 00:56:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1182842 00:08:53.486 00:56:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:53.486 00:56:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.487 00:56:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:53.744 00:56:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.744 00:56:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1182842 00:08:53.744 00:56:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:53.744 00:56:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.744 00:56:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:54.002 00:56:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.002 00:56:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1182842 00:08:54.002 00:56:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:54.002 00:56:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.002 00:56:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:54.567 00:56:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.567 00:56:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1182842 00:08:54.567 00:56:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:54.567 00:56:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.567 00:56:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:54.824 00:56:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.824 00:56:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 
-- # kill -0 1182842 00:08:54.824 00:56:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:54.824 00:56:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.824 00:56:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:55.081 00:56:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.081 00:56:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1182842 00:08:55.081 00:56:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:55.081 00:56:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.081 00:56:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:55.339 00:56:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.339 00:56:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1182842 00:08:55.339 00:56:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:55.339 00:56:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.339 00:56:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:55.598 00:56:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.598 00:56:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1182842 00:08:55.598 00:56:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:55.598 00:56:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.598 00:56:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:56.221 00:56:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.221 00:56:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1182842 00:08:56.221 00:56:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:56.221 00:56:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.221 00:56:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:56.221 00:56:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.221 00:56:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1182842 00:08:56.221 00:56:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:56.221 00:56:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.221 00:56:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:56.786 00:56:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.786 00:56:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1182842 00:08:56.786 00:56:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:56.786 00:56:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.786 00:56:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:57.044 00:56:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.044 00:56:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1182842 00:08:57.044 00:56:09 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:57.044 00:56:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:57.044 00:56:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:57.044 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:57.302 00:56:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:57.302 00:56:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1182842 00:08:57.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1182842) - No such process 00:08:57.302 00:56:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1182842 00:08:57.302 00:56:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:57.302 00:56:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:57.302 00:56:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:08:57.302 00:56:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:57.302 00:56:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:08:57.302 00:56:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:57.302 00:56:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:08:57.302 00:56:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:57.302 00:56:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:57.302 rmmod nvme_tcp 00:08:57.302 rmmod nvme_fabrics 00:08:57.302 rmmod nvme_keyring 00:08:57.302 00:56:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:57.302 00:56:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:08:57.302 00:56:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:08:57.302 00:56:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1182687 ']' 00:08:57.302 00:56:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1182687 00:08:57.302 00:56:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 1182687 ']' 00:08:57.302 00:56:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 1182687 00:08:57.302 00:56:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:08:57.302 00:56:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:57.302 00:56:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1182687 00:08:57.302 00:56:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:08:57.302 00:56:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:08:57.302 00:56:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1182687' 00:08:57.302 killing process with pid 1182687 00:08:57.302 00:56:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 1182687 00:08:57.302 [2024-05-15 00:56:09.617026] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled 
for removal in v24.09 hit 1 times 00:08:57.303 00:56:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 1182687 00:08:57.562 00:56:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:57.562 00:56:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:57.562 00:56:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:57.562 00:56:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:57.562 00:56:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:57.562 00:56:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.562 00:56:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:57.562 00:56:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.096 00:56:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:00.096 00:09:00.096 real 0m16.482s 00:09:00.096 user 0m40.111s 00:09:00.096 sys 0m6.488s 00:09:00.096 00:56:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:00.096 00:56:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:00.096 ************************************ 00:09:00.096 END TEST nvmf_connect_stress 00:09:00.096 ************************************ 00:09:00.096 00:56:11 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:00.097 00:56:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:00.097 00:56:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:00.097 00:56:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:00.097 ************************************ 00:09:00.097 START TEST nvmf_fused_ordering 00:09:00.097 ************************************ 00:09:00.097 00:56:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:00.097 * Looking for test storage... 
00:09:00.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:09:00.097 00:56:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:02.628 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:02.628 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:09:02.628 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:02.628 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:02.628 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:02.628 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:02.628 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:02.628 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:09:02.628 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:02.628 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:09:02.628 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:09:02.628 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:09:02.628 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:09:02.628 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:09:02.628 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:09:02.628 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:02.628 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:02.628 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:02.628 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:02.628 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:02.628 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:02.628 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:02.628 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:02.628 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:02.628 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:02.628 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:02.628 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:02.628 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:02.628 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:09:02.628 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:02.628 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:02.629 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:02.629 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:02.629 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:02.629 00:56:14 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:02.629 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:02.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:02.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:09:02.629 00:09:02.629 --- 10.0.0.2 ping statistics --- 00:09:02.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.629 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:02.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:02.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:09:02.629 00:09:02.629 --- 10.0.0.1 ping statistics --- 00:09:02.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.629 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1186404 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1186404 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 1186404 ']' 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:02.629 00:56:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:02.629 [2024-05-15 00:56:14.712994] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:09:02.629 [2024-05-15 00:56:14.713072] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.629 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.629 [2024-05-15 00:56:14.794678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.629 [2024-05-15 00:56:14.909702] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:02.629 [2024-05-15 00:56:14.909761] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:02.629 [2024-05-15 00:56:14.909777] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.629 [2024-05-15 00:56:14.909791] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:02.629 [2024-05-15 00:56:14.909802] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:02.629 [2024-05-15 00:56:14.909840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.564 00:56:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:03.564 00:56:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:09:03.564 00:56:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:03.564 00:56:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:03.564 00:56:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:03.564 00:56:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.564 00:56:15 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:03.564 00:56:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.564 00:56:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:03.564 [2024-05-15 00:56:15.681478] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.564 00:56:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.564 00:56:15 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:03.564 00:56:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.564 00:56:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:03.564 00:56:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.564 00:56:15 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:03.564 00:56:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.564 00:56:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:03.564 [2024-05-15 00:56:15.697438] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:03.564 [2024-05-15 00:56:15.697704] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:03.564 00:56:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.564 00:56:15 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:03.564 00:56:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.564 00:56:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:03.564 NULL1 00:09:03.564 00:56:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.564 00:56:15 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:09:03.564 00:56:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.564 00:56:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:03.564 00:56:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.564 00:56:15 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:03.564 00:56:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.564 00:56:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:03.564 00:56:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.564 00:56:15 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:03.564 [2024-05-15 00:56:15.742689] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:09:03.564 [2024-05-15 00:56:15.742732] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1186557 ] 00:09:03.564 EAL: No free 2048 kB hugepages reported on node 1 00:09:04.499 Attached to nqn.2016-06.io.spdk:cnode1 00:09:04.499 Namespace ID: 1 size: 1GB 00:09:04.499 fused_ordering(0) 00:09:04.499 fused_ordering(1) 00:09:04.499 fused_ordering(2) 00:09:04.499 fused_ordering(3) 00:09:04.499 fused_ordering(4) 00:09:04.499 fused_ordering(5) 00:09:04.499 fused_ordering(6) 00:09:04.499 fused_ordering(7) 00:09:04.499 fused_ordering(8) 00:09:04.499 fused_ordering(9) 00:09:04.499 fused_ordering(10) 00:09:04.499 fused_ordering(11) 00:09:04.499 fused_ordering(12) 00:09:04.499 fused_ordering(13) 00:09:04.499 fused_ordering(14) 00:09:04.499 fused_ordering(15) 00:09:04.499 fused_ordering(16) 00:09:04.499 fused_ordering(17) 00:09:04.499 fused_ordering(18) 00:09:04.499 fused_ordering(19) 00:09:04.499 fused_ordering(20) 00:09:04.499 fused_ordering(21) 00:09:04.499 fused_ordering(22) 00:09:04.499 fused_ordering(23) 00:09:04.499 fused_ordering(24) 00:09:04.499 fused_ordering(25) 00:09:04.499 fused_ordering(26) 00:09:04.499 fused_ordering(27) 00:09:04.499 fused_ordering(28) 00:09:04.499 fused_ordering(29) 00:09:04.499 fused_ordering(30) 00:09:04.499 fused_ordering(31) 00:09:04.499 fused_ordering(32) 00:09:04.499 fused_ordering(33) 00:09:04.499 fused_ordering(34) 00:09:04.499 fused_ordering(35) 00:09:04.499 fused_ordering(36) 00:09:04.499 fused_ordering(37) 00:09:04.499 fused_ordering(38) 00:09:04.499 fused_ordering(39) 00:09:04.499 fused_ordering(40) 00:09:04.499 fused_ordering(41) 00:09:04.499 fused_ordering(42) 00:09:04.499 fused_ordering(43) 00:09:04.499 fused_ordering(44) 00:09:04.499 fused_ordering(45) 00:09:04.499 fused_ordering(46) 00:09:04.499 fused_ordering(47) 00:09:04.499 fused_ordering(48) 00:09:04.499 fused_ordering(49) 00:09:04.499 fused_ordering(50) 00:09:04.499 fused_ordering(51) 00:09:04.499 fused_ordering(52) 00:09:04.499 fused_ordering(53) 00:09:04.499 fused_ordering(54) 00:09:04.499 fused_ordering(55) 00:09:04.499 fused_ordering(56) 00:09:04.499 fused_ordering(57) 00:09:04.499 fused_ordering(58) 00:09:04.499 fused_ordering(59) 00:09:04.499 fused_ordering(60) 00:09:04.499 fused_ordering(61) 00:09:04.499 fused_ordering(62) 00:09:04.499 fused_ordering(63) 00:09:04.499 fused_ordering(64) 00:09:04.499 fused_ordering(65) 00:09:04.499 fused_ordering(66) 00:09:04.499 fused_ordering(67) 00:09:04.499 fused_ordering(68) 00:09:04.499 fused_ordering(69) 00:09:04.499 fused_ordering(70) 00:09:04.499 fused_ordering(71) 00:09:04.499 fused_ordering(72) 00:09:04.499 fused_ordering(73) 00:09:04.499 fused_ordering(74) 00:09:04.499 fused_ordering(75) 00:09:04.499 fused_ordering(76) 00:09:04.499 fused_ordering(77) 00:09:04.499 fused_ordering(78) 00:09:04.499 fused_ordering(79) 00:09:04.499 fused_ordering(80) 00:09:04.499 fused_ordering(81) 00:09:04.499 fused_ordering(82) 00:09:04.499 fused_ordering(83) 00:09:04.499 fused_ordering(84) 00:09:04.499 fused_ordering(85) 00:09:04.499 fused_ordering(86) 00:09:04.499 fused_ordering(87) 00:09:04.499 fused_ordering(88) 00:09:04.499 fused_ordering(89) 00:09:04.499 fused_ordering(90) 00:09:04.499 fused_ordering(91) 00:09:04.499 fused_ordering(92) 00:09:04.499 fused_ordering(93) 00:09:04.499 fused_ordering(94) 00:09:04.499 fused_ordering(95) 00:09:04.499 fused_ordering(96) 00:09:04.499 
fused_ordering(97) 00:09:04.499 fused_ordering(98) 00:09:04.499 fused_ordering(99) 00:09:04.499 fused_ordering(100) 00:09:04.499 fused_ordering(101) 00:09:04.499 fused_ordering(102) 00:09:04.499 fused_ordering(103) 00:09:04.499 fused_ordering(104) 00:09:04.499 fused_ordering(105) 00:09:04.499 fused_ordering(106) 00:09:04.499 fused_ordering(107) 00:09:04.499 fused_ordering(108) 00:09:04.499 fused_ordering(109) 00:09:04.499 fused_ordering(110) 00:09:04.499 fused_ordering(111) 00:09:04.499 fused_ordering(112) 00:09:04.499 fused_ordering(113) 00:09:04.499 fused_ordering(114) 00:09:04.499 fused_ordering(115) 00:09:04.499 fused_ordering(116) 00:09:04.499 fused_ordering(117) 00:09:04.499 fused_ordering(118) 00:09:04.499 fused_ordering(119) 00:09:04.499 fused_ordering(120) 00:09:04.499 fused_ordering(121) 00:09:04.499 fused_ordering(122) 00:09:04.499 fused_ordering(123) 00:09:04.499 fused_ordering(124) 00:09:04.499 fused_ordering(125) 00:09:04.499 fused_ordering(126) 00:09:04.499 fused_ordering(127) 00:09:04.499 fused_ordering(128) 00:09:04.499 fused_ordering(129) 00:09:04.499 fused_ordering(130) 00:09:04.499 fused_ordering(131) 00:09:04.499 fused_ordering(132) 00:09:04.499 fused_ordering(133) 00:09:04.499 fused_ordering(134) 00:09:04.499 fused_ordering(135) 00:09:04.499 fused_ordering(136) 00:09:04.499 fused_ordering(137) 00:09:04.499 fused_ordering(138) 00:09:04.499 fused_ordering(139) 00:09:04.499 fused_ordering(140) 00:09:04.499 fused_ordering(141) 00:09:04.499 fused_ordering(142) 00:09:04.499 fused_ordering(143) 00:09:04.499 fused_ordering(144) 00:09:04.499 fused_ordering(145) 00:09:04.499 fused_ordering(146) 00:09:04.499 fused_ordering(147) 00:09:04.499 fused_ordering(148) 00:09:04.499 fused_ordering(149) 00:09:04.499 fused_ordering(150) 00:09:04.499 fused_ordering(151) 00:09:04.499 fused_ordering(152) 00:09:04.499 fused_ordering(153) 00:09:04.499 fused_ordering(154) 00:09:04.499 fused_ordering(155) 00:09:04.499 fused_ordering(156) 00:09:04.499 fused_ordering(157) 00:09:04.499 fused_ordering(158) 00:09:04.499 fused_ordering(159) 00:09:04.499 fused_ordering(160) 00:09:04.499 fused_ordering(161) 00:09:04.499 fused_ordering(162) 00:09:04.499 fused_ordering(163) 00:09:04.499 fused_ordering(164) 00:09:04.499 fused_ordering(165) 00:09:04.499 fused_ordering(166) 00:09:04.499 fused_ordering(167) 00:09:04.499 fused_ordering(168) 00:09:04.499 fused_ordering(169) 00:09:04.499 fused_ordering(170) 00:09:04.499 fused_ordering(171) 00:09:04.499 fused_ordering(172) 00:09:04.499 fused_ordering(173) 00:09:04.499 fused_ordering(174) 00:09:04.499 fused_ordering(175) 00:09:04.499 fused_ordering(176) 00:09:04.499 fused_ordering(177) 00:09:04.499 fused_ordering(178) 00:09:04.499 fused_ordering(179) 00:09:04.499 fused_ordering(180) 00:09:04.499 fused_ordering(181) 00:09:04.499 fused_ordering(182) 00:09:04.499 fused_ordering(183) 00:09:04.499 fused_ordering(184) 00:09:04.499 fused_ordering(185) 00:09:04.499 fused_ordering(186) 00:09:04.499 fused_ordering(187) 00:09:04.499 fused_ordering(188) 00:09:04.499 fused_ordering(189) 00:09:04.499 fused_ordering(190) 00:09:04.499 fused_ordering(191) 00:09:04.499 fused_ordering(192) 00:09:04.499 fused_ordering(193) 00:09:04.499 fused_ordering(194) 00:09:04.499 fused_ordering(195) 00:09:04.499 fused_ordering(196) 00:09:04.499 fused_ordering(197) 00:09:04.499 fused_ordering(198) 00:09:04.499 fused_ordering(199) 00:09:04.499 fused_ordering(200) 00:09:04.499 fused_ordering(201) 00:09:04.499 fused_ordering(202) 00:09:04.499 fused_ordering(203) 00:09:04.499 fused_ordering(204) 
00:09:04.499 fused_ordering(205) 00:09:05.066 fused_ordering(206) 00:09:05.066 fused_ordering(207) 00:09:05.066 fused_ordering(208) 00:09:05.066 fused_ordering(209) 00:09:05.066 fused_ordering(210) 00:09:05.066 fused_ordering(211) 00:09:05.066 fused_ordering(212) 00:09:05.066 fused_ordering(213) 00:09:05.066 fused_ordering(214) 00:09:05.066 fused_ordering(215) 00:09:05.066 fused_ordering(216) 00:09:05.066 fused_ordering(217) 00:09:05.066 fused_ordering(218) 00:09:05.066 fused_ordering(219) 00:09:05.066 fused_ordering(220) 00:09:05.066 fused_ordering(221) 00:09:05.066 fused_ordering(222) 00:09:05.066 fused_ordering(223) 00:09:05.066 fused_ordering(224) 00:09:05.066 fused_ordering(225) 00:09:05.066 fused_ordering(226) 00:09:05.066 fused_ordering(227) 00:09:05.066 fused_ordering(228) 00:09:05.066 fused_ordering(229) 00:09:05.066 fused_ordering(230) 00:09:05.066 fused_ordering(231) 00:09:05.066 fused_ordering(232) 00:09:05.066 fused_ordering(233) 00:09:05.066 fused_ordering(234) 00:09:05.066 fused_ordering(235) 00:09:05.066 fused_ordering(236) 00:09:05.066 fused_ordering(237) 00:09:05.066 fused_ordering(238) 00:09:05.066 fused_ordering(239) 00:09:05.066 fused_ordering(240) 00:09:05.066 fused_ordering(241) 00:09:05.066 fused_ordering(242) 00:09:05.066 fused_ordering(243) 00:09:05.066 fused_ordering(244) 00:09:05.066 fused_ordering(245) 00:09:05.066 fused_ordering(246) 00:09:05.066 fused_ordering(247) 00:09:05.066 fused_ordering(248) 00:09:05.066 fused_ordering(249) 00:09:05.066 fused_ordering(250) 00:09:05.066 fused_ordering(251) 00:09:05.066 fused_ordering(252) 00:09:05.066 fused_ordering(253) 00:09:05.066 fused_ordering(254) 00:09:05.066 fused_ordering(255) 00:09:05.066 fused_ordering(256) 00:09:05.066 fused_ordering(257) 00:09:05.066 fused_ordering(258) 00:09:05.066 fused_ordering(259) 00:09:05.066 fused_ordering(260) 00:09:05.066 fused_ordering(261) 00:09:05.066 fused_ordering(262) 00:09:05.066 fused_ordering(263) 00:09:05.066 fused_ordering(264) 00:09:05.066 fused_ordering(265) 00:09:05.066 fused_ordering(266) 00:09:05.066 fused_ordering(267) 00:09:05.066 fused_ordering(268) 00:09:05.066 fused_ordering(269) 00:09:05.066 fused_ordering(270) 00:09:05.066 fused_ordering(271) 00:09:05.066 fused_ordering(272) 00:09:05.066 fused_ordering(273) 00:09:05.066 fused_ordering(274) 00:09:05.066 fused_ordering(275) 00:09:05.066 fused_ordering(276) 00:09:05.066 fused_ordering(277) 00:09:05.066 fused_ordering(278) 00:09:05.066 fused_ordering(279) 00:09:05.066 fused_ordering(280) 00:09:05.066 fused_ordering(281) 00:09:05.066 fused_ordering(282) 00:09:05.066 fused_ordering(283) 00:09:05.066 fused_ordering(284) 00:09:05.066 fused_ordering(285) 00:09:05.066 fused_ordering(286) 00:09:05.066 fused_ordering(287) 00:09:05.066 fused_ordering(288) 00:09:05.066 fused_ordering(289) 00:09:05.066 fused_ordering(290) 00:09:05.066 fused_ordering(291) 00:09:05.066 fused_ordering(292) 00:09:05.066 fused_ordering(293) 00:09:05.066 fused_ordering(294) 00:09:05.066 fused_ordering(295) 00:09:05.066 fused_ordering(296) 00:09:05.066 fused_ordering(297) 00:09:05.066 fused_ordering(298) 00:09:05.066 fused_ordering(299) 00:09:05.066 fused_ordering(300) 00:09:05.066 fused_ordering(301) 00:09:05.066 fused_ordering(302) 00:09:05.066 fused_ordering(303) 00:09:05.066 fused_ordering(304) 00:09:05.066 fused_ordering(305) 00:09:05.066 fused_ordering(306) 00:09:05.067 fused_ordering(307) 00:09:05.067 fused_ordering(308) 00:09:05.067 fused_ordering(309) 00:09:05.067 fused_ordering(310) 00:09:05.067 fused_ordering(311) 00:09:05.067 
00:09:05.067 fused_ordering(312) ... 00:09:07.505 fused_ordering(956) [per-iteration fused_ordering(N) progress markers for N = 312 through 956 condensed; the entries are identical apart from the counter, with timestamps advancing from 00:09:05.067 to 00:09:07.505] 00:09:07.505
fused_ordering(957) 00:09:07.505 fused_ordering(958) 00:09:07.505 fused_ordering(959) 00:09:07.505 fused_ordering(960) 00:09:07.505 fused_ordering(961) 00:09:07.505 fused_ordering(962) 00:09:07.505 fused_ordering(963) 00:09:07.505 fused_ordering(964) 00:09:07.505 fused_ordering(965) 00:09:07.505 fused_ordering(966) 00:09:07.505 fused_ordering(967) 00:09:07.505 fused_ordering(968) 00:09:07.505 fused_ordering(969) 00:09:07.505 fused_ordering(970) 00:09:07.505 fused_ordering(971) 00:09:07.505 fused_ordering(972) 00:09:07.505 fused_ordering(973) 00:09:07.505 fused_ordering(974) 00:09:07.505 fused_ordering(975) 00:09:07.505 fused_ordering(976) 00:09:07.505 fused_ordering(977) 00:09:07.505 fused_ordering(978) 00:09:07.505 fused_ordering(979) 00:09:07.505 fused_ordering(980) 00:09:07.505 fused_ordering(981) 00:09:07.505 fused_ordering(982) 00:09:07.505 fused_ordering(983) 00:09:07.505 fused_ordering(984) 00:09:07.505 fused_ordering(985) 00:09:07.505 fused_ordering(986) 00:09:07.505 fused_ordering(987) 00:09:07.505 fused_ordering(988) 00:09:07.505 fused_ordering(989) 00:09:07.505 fused_ordering(990) 00:09:07.505 fused_ordering(991) 00:09:07.505 fused_ordering(992) 00:09:07.505 fused_ordering(993) 00:09:07.505 fused_ordering(994) 00:09:07.505 fused_ordering(995) 00:09:07.505 fused_ordering(996) 00:09:07.505 fused_ordering(997) 00:09:07.505 fused_ordering(998) 00:09:07.505 fused_ordering(999) 00:09:07.505 fused_ordering(1000) 00:09:07.505 fused_ordering(1001) 00:09:07.505 fused_ordering(1002) 00:09:07.505 fused_ordering(1003) 00:09:07.505 fused_ordering(1004) 00:09:07.505 fused_ordering(1005) 00:09:07.505 fused_ordering(1006) 00:09:07.505 fused_ordering(1007) 00:09:07.505 fused_ordering(1008) 00:09:07.505 fused_ordering(1009) 00:09:07.505 fused_ordering(1010) 00:09:07.505 fused_ordering(1011) 00:09:07.505 fused_ordering(1012) 00:09:07.505 fused_ordering(1013) 00:09:07.505 fused_ordering(1014) 00:09:07.505 fused_ordering(1015) 00:09:07.505 fused_ordering(1016) 00:09:07.505 fused_ordering(1017) 00:09:07.505 fused_ordering(1018) 00:09:07.505 fused_ordering(1019) 00:09:07.505 fused_ordering(1020) 00:09:07.505 fused_ordering(1021) 00:09:07.505 fused_ordering(1022) 00:09:07.505 fused_ordering(1023) 00:09:07.505 00:56:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:09:07.505 00:56:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:09:07.505 00:56:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:07.505 00:56:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:09:07.505 00:56:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:07.505 00:56:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:09:07.505 00:56:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:07.505 00:56:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:07.505 rmmod nvme_tcp 00:09:07.505 rmmod nvme_fabrics 00:09:07.505 rmmod nvme_keyring 00:09:07.505 00:56:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:07.505 00:56:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:09:07.505 00:56:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:09:07.505 00:56:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1186404 ']' 00:09:07.505 00:56:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1186404 
00:09:07.505 00:56:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 1186404 ']' 00:09:07.505 00:56:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 1186404 00:09:07.505 00:56:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:09:07.505 00:56:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:07.505 00:56:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1186404 00:09:07.505 00:56:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:09:07.505 00:56:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:09:07.505 00:56:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1186404' 00:09:07.505 killing process with pid 1186404 00:09:07.505 00:56:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 1186404 00:09:07.505 [2024-05-15 00:56:19.737778] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:07.505 00:56:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 1186404 00:09:07.764 00:56:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:07.764 00:56:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:07.764 00:56:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:07.764 00:56:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:07.764 00:56:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:07.764 00:56:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.764 00:56:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:07.764 00:56:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.668 00:56:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:09.668 00:09:09.668 real 0m10.072s 00:09:09.668 user 0m7.657s 00:09:09.668 sys 0m4.799s 00:09:09.668 00:56:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:09.668 00:56:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:09.668 ************************************ 00:09:09.668 END TEST nvmf_fused_ordering 00:09:09.668 ************************************ 00:09:09.927 00:56:22 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:09.927 00:56:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:09.927 00:56:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:09.927 00:56:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:09.927 ************************************ 00:09:09.927 START TEST nvmf_delete_subsystem 00:09:09.927 ************************************ 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 
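The START TEST / END TEST banners and the real/user/sys timing above come from the harness's run_test wrapper, which takes a test name plus the command to run and times it. As a rough, illustrative sketch (not the verbatim helper from autotest_common.sh), such a wrapper looks like:

    run_test() {                      # illustrative sketch only; argument handling is simplified
        local name=$1; shift
        echo "************ START TEST ${name} ************"
        local rc=0
        time "$@" || rc=$?            # e.g. delete_subsystem.sh --transport=tcp
        echo "************ END TEST ${name} ************"
        return $rc
    }

The delete_subsystem suite launched here runs through exactly that wrapper, so its own timing summary and END TEST banner appear at the end of the output below.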
00:09:09.927 * Looking for test storage... 00:09:09.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:09.927 00:56:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:12.459 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:12.459 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:12.459 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:12.459 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:12.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:12.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:09:12.459 00:09:12.459 --- 10.0.0.2 ping statistics --- 00:09:12.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.459 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:12.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:12.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:09:12.459 00:09:12.459 --- 10.0.0.1 ping statistics --- 00:09:12.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.459 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:12.459 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:12.460 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:12.460 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:12.460 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:12.460 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:12.460 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:12.460 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:12.460 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1189303 00:09:12.460 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:12.460 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1189303 00:09:12.460 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 1189303 ']' 00:09:12.460 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.460 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:12.460 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
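Condensed from the trace above, the TCP test topology puts the target-side port in its own network namespace so the initiator and target stacks stay separate on one host. The interface names and addresses below are the ones from this run; the long workspace path to nvmf_tgt is shortened:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target reachability check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &   # target app on cores 0-1

waitforlisten then polls the target's RPC socket (/var/tmp/spdk.sock) until the application is ready, which is the wait announced on the next line.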
00:09:12.460 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:12.460 00:56:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:12.739 [2024-05-15 00:56:24.876629] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:09:12.739 [2024-05-15 00:56:24.876706] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.739 EAL: No free 2048 kB hugepages reported on node 1 00:09:12.739 [2024-05-15 00:56:24.957239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:12.739 [2024-05-15 00:56:25.073024] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:12.739 [2024-05-15 00:56:25.073086] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:12.739 [2024-05-15 00:56:25.073102] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:12.739 [2024-05-15 00:56:25.073115] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:12.739 [2024-05-15 00:56:25.073127] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:12.739 [2024-05-15 00:56:25.073226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.739 [2024-05-15 00:56:25.073233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.693 00:56:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:13.693 00:56:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:09:13.693 00:56:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:13.693 00:56:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:13.693 00:56:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:13.693 00:56:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:13.693 00:56:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:13.693 00:56:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.693 00:56:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:13.693 [2024-05-15 00:56:25.864355] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:13.693 00:56:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.693 00:56:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:13.693 00:56:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.693 00:56:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:13.693 00:56:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.693 00:56:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:13.693 00:56:25 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.693 00:56:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:13.693 [2024-05-15 00:56:25.880408] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:13.693 [2024-05-15 00:56:25.880699] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:13.693 00:56:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.693 00:56:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:13.693 00:56:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.693 00:56:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:13.693 NULL1 00:09:13.693 00:56:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.693 00:56:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:13.693 00:56:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.693 00:56:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:13.693 Delay0 00:09:13.693 00:56:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.693 00:56:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:13.693 00:56:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.693 00:56:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:13.693 00:56:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.693 00:56:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1189459 00:09:13.693 00:56:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:13.693 00:56:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:13.693 EAL: No free 2048 kB hugepages reported on node 1 00:09:13.693 [2024-05-15 00:56:25.955384] subsystem.c:1520:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
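At this point the target side has been configured entirely through the RPC interface and the initiator load has just been started; the deletion attempted next is the actual subject of the test. Collected from the trace above (rpc_cmd is the harness helper that forwards each call to the running target's RPC server), the sequence is roughly:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512               # null backing bdev: 1000 MB, 512-byte blocks
    rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
                                                          # ~1 s of injected latency keeps I/O queued
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    ./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &         # 5 s of 70/30 random I/O at queue depth 128
    sleep 2
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

With roughly a second of delay injected on every read and write, many commands are still outstanding when the subsystem is torn down, which is why the block that follows is a stream of "completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries on the initiator side rather than a clean drain.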
00:09:15.591 00:56:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:15.591 00:56:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.591 00:56:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 Write completed with error (sct=0, sc=8) 00:09:15.849 starting I/O failed: -6 00:09:15.849 Write completed with error (sct=0, sc=8) 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 starting I/O failed: -6 00:09:15.849 Write completed with error (sct=0, sc=8) 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 starting I/O failed: -6 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 Write completed with error (sct=0, sc=8) 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 starting I/O failed: -6 00:09:15.849 Write completed with error (sct=0, sc=8) 00:09:15.849 Write completed with error (sct=0, sc=8) 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 starting I/O failed: -6 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 starting I/O failed: -6 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 Write completed with error (sct=0, sc=8) 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 starting I/O failed: -6 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 Write completed with error (sct=0, sc=8) 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 starting I/O failed: -6 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 Write completed with error (sct=0, sc=8) 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 Write completed with error (sct=0, sc=8) 00:09:15.849 starting I/O failed: -6 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 [2024-05-15 00:56:28.046897] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa810000c00 is same with the state(5) to be set 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 Write completed with error (sct=0, sc=8) 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 Write completed with error (sct=0, sc=8) 00:09:15.849 Write completed with error (sct=0, sc=8) 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 Read completed with error (sct=0, sc=8) 00:09:15.849 Write completed with error (sct=0, sc=8) 00:09:15.849 Read completed with 
error (sct=0, sc=8) 00:09:15.849 [repeated 'Read/Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' completion entries condensed]
[2024-05-15 00:56:28.047784] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e1880 is same with the state(5) to be set [completion-error entries condensed]
[2024-05-15 00:56:29.013559] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16007f0 is same with the state(5) to be set [completion-error entries condensed]
[2024-05-15 00:56:29.049100] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa81000c2f0 is same with the state(5) to be set [completion-error entries condensed]
[2024-05-15 00:56:29.049506] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e0e10 is same with the state(5) to be set [completion-error entries condensed]
[2024-05-15 00:56:29.050004] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e1a60 is same with the state(5) to be set [completion-error entries condensed]
Read completed with error (sct=0,
sc=8) 00:09:16.786 Read completed with error (sct=0, sc=8) 00:09:16.786 Write completed with error (sct=0, sc=8) 00:09:16.786 Read completed with error (sct=0, sc=8) 00:09:16.786 Read completed with error (sct=0, sc=8) 00:09:16.786 Write completed with error (sct=0, sc=8) 00:09:16.786 Read completed with error (sct=0, sc=8) 00:09:16.786 Read completed with error (sct=0, sc=8) 00:09:16.786 Read completed with error (sct=0, sc=8) 00:09:16.786 Read completed with error (sct=0, sc=8) 00:09:16.786 Read completed with error (sct=0, sc=8) 00:09:16.786 Read completed with error (sct=0, sc=8) 00:09:16.786 Read completed with error (sct=0, sc=8) 00:09:16.786 Read completed with error (sct=0, sc=8) 00:09:16.786 Write completed with error (sct=0, sc=8) 00:09:16.786 [2024-05-15 00:56:29.050253] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1607790 is same with the state(5) to be set 00:09:16.786 Initializing NVMe Controllers 00:09:16.786 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:16.786 Controller IO queue size 128, less than required. 00:09:16.786 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:16.786 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:16.786 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:16.786 Initialization complete. Launching workers. 00:09:16.786 ======================================================== 00:09:16.786 Latency(us) 00:09:16.786 Device Information : IOPS MiB/s Average min max 00:09:16.786 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 174.11 0.09 1065530.49 1142.98 2002803.34 00:09:16.786 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 147.32 0.07 932618.92 488.75 2001719.78 00:09:16.786 ======================================================== 00:09:16.786 Total : 321.43 0.16 1004612.68 488.75 2002803.34 00:09:16.786 00:09:16.786 [2024-05-15 00:56:29.050984] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16007f0 (9): Bad file descriptor 00:09:16.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:16.786 00:56:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.786 00:56:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:16.786 00:56:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1189459 00:09:16.786 00:56:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:17.352 00:56:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:17.352 00:56:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1189459 00:09:17.353 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1189459) - No such process 00:09:17.353 00:56:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1189459 00:09:17.353 00:56:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:09:17.353 00:56:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1189459 00:09:17.353 00:56:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:09:17.353 00:56:29 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:17.353 00:56:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:09:17.353 00:56:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:17.353 00:56:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1189459 00:09:17.353 00:56:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:09:17.353 00:56:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:17.353 00:56:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:17.353 00:56:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:17.353 00:56:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:17.353 00:56:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.353 00:56:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:17.353 00:56:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.353 00:56:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:17.353 00:56:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.353 00:56:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:17.353 [2024-05-15 00:56:29.570071] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:17.353 00:56:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.353 00:56:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:17.353 00:56:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.353 00:56:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:17.353 00:56:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.353 00:56:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1189870 00:09:17.353 00:56:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:17.353 00:56:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:17.353 00:56:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1189870 00:09:17.353 00:56:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:17.353 EAL: No free 2048 kB hugepages reported on node 1 00:09:17.353 [2024-05-15 00:56:29.630940] subsystem.c:1520:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:09:17.922 00:56:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:17.922 00:56:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1189870 00:09:17.922 00:56:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:18.488 00:56:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:18.488 00:56:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1189870 00:09:18.488 00:56:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:18.745 00:56:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:18.745 00:56:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1189870 00:09:18.745 00:56:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:19.310 00:56:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:19.310 00:56:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1189870 00:09:19.310 00:56:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:19.876 00:56:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:19.876 00:56:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1189870 00:09:19.876 00:56:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:20.441 00:56:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:20.441 00:56:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1189870 00:09:20.441 00:56:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:20.441 Initializing NVMe Controllers 00:09:20.441 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:20.441 Controller IO queue size 128, less than required. 00:09:20.441 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:20.441 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:20.441 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:20.441 Initialization complete. Launching workers. 
00:09:20.441 ======================================================== 00:09:20.441 Latency(us) 00:09:20.441 Device Information : IOPS MiB/s Average min max 00:09:20.441 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004628.78 1000248.00 1041461.42 00:09:20.441 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005117.79 1000223.03 1011466.12 00:09:20.441 ======================================================== 00:09:20.441 Total : 256.00 0.12 1004873.28 1000223.03 1041461.42 00:09:20.441 00:09:21.008 00:56:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:21.008 00:56:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1189870 00:09:21.008 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1189870) - No such process 00:09:21.008 00:56:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1189870 00:09:21.008 00:56:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:21.008 00:56:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:21.008 00:56:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:21.008 00:56:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:09:21.008 00:56:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:21.008 00:56:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:09:21.008 00:56:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:21.008 00:56:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:21.008 rmmod nvme_tcp 00:09:21.008 rmmod nvme_fabrics 00:09:21.008 rmmod nvme_keyring 00:09:21.008 00:56:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:21.008 00:56:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:09:21.008 00:56:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:09:21.008 00:56:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1189303 ']' 00:09:21.008 00:56:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1189303 00:09:21.008 00:56:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 1189303 ']' 00:09:21.008 00:56:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 1189303 00:09:21.008 00:56:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:09:21.008 00:56:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:21.008 00:56:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1189303 00:09:21.008 00:56:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:21.008 00:56:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:21.008 00:56:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1189303' 00:09:21.008 killing process with pid 1189303 00:09:21.008 00:56:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 1189303 00:09:21.008 [2024-05-15 00:56:33.195035] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:21.008 00:56:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 1189303 00:09:21.266 00:56:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:21.266 00:56:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:21.266 00:56:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:21.266 00:56:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:21.266 00:56:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:21.266 00:56:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.266 00:56:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:21.266 00:56:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.169 00:56:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:23.169 00:09:23.169 real 0m13.408s 00:09:23.169 user 0m29.290s 00:09:23.169 sys 0m3.310s 00:09:23.169 00:56:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:23.169 00:56:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:23.169 ************************************ 00:09:23.169 END TEST nvmf_delete_subsystem 00:09:23.169 ************************************ 00:09:23.169 00:56:35 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:09:23.169 00:56:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:23.169 00:56:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:23.169 00:56:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:23.428 ************************************ 00:09:23.428 START TEST nvmf_ns_masking 00:09:23.428 ************************************ 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:09:23.428 * Looking for test storage... 
00:09:23.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=e328b6af-1447-4836-8b3c-85e6fd54157d 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:23.428 00:56:35 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:09:23.428 00:56:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:25.957 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:25.957 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:25.957 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.957 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:25.958 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:25.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:25.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:09:25.958 00:09:25.958 --- 10.0.0.2 ping statistics --- 00:09:25.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.958 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:25.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:25.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:09:25.958 00:09:25.958 --- 10.0.0.1 ping statistics --- 00:09:25.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.958 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1192619 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1192619 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 1192619 ']' 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:25.958 00:56:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:25.958 [2024-05-15 00:56:38.291569] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:09:25.958 [2024-05-15 00:56:38.291660] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.958 EAL: No free 2048 kB hugepages reported on node 1 00:09:26.216 [2024-05-15 00:56:38.381218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:26.216 [2024-05-15 00:56:38.506155] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:26.216 [2024-05-15 00:56:38.506226] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:26.216 [2024-05-15 00:56:38.506243] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:26.216 [2024-05-15 00:56:38.506256] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:26.216 [2024-05-15 00:56:38.506267] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:26.216 [2024-05-15 00:56:38.506333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.216 [2024-05-15 00:56:38.506387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:26.216 [2024-05-15 00:56:38.506501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:26.216 [2024-05-15 00:56:38.506504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.473 00:56:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:26.473 00:56:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:09:26.473 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:26.473 00:56:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:26.473 00:56:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:26.473 00:56:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:26.473 00:56:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:26.730 [2024-05-15 00:56:38.892624] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:26.730 00:56:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:09:26.730 00:56:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:09:26.730 00:56:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:26.988 Malloc1 00:09:26.988 00:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:27.246 Malloc2 00:09:27.246 00:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:27.503 00:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:27.760 00:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.019 [2024-05-15 00:56:40.180325] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:28.019 [2024-05-15 00:56:40.180663] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.019 00:56:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:09:28.019 00:56:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e328b6af-1447-4836-8b3c-85e6fd54157d -a 10.0.0.2 -s 4420 -i 4 00:09:28.019 00:56:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:09:28.019 00:56:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:09:28.019 00:56:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:28.019 00:56:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:28.019 00:56:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:09:29.960 00:56:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:29.960 00:56:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:29.960 00:56:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:29.960 00:56:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:29.960 00:56:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:29.960 00:56:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:09:29.960 00:56:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:09:29.960 00:56:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:30.218 00:56:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:30.218 00:56:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:30.218 00:56:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:09:30.218 00:56:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:30.218 00:56:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:30.218 [ 0]:0x1 00:09:30.218 00:56:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:30.219 00:56:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:30.219 00:56:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9a298bb3cf7b46e59bd58c8c1232cc9a 00:09:30.219 00:56:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9a298bb3cf7b46e59bd58c8c1232cc9a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:30.219 00:56:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:09:30.477 00:56:42 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:09:30.477 00:56:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:30.477 00:56:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:30.477 [ 0]:0x1 00:09:30.477 00:56:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:30.477 00:56:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:30.477 00:56:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9a298bb3cf7b46e59bd58c8c1232cc9a 00:09:30.477 00:56:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9a298bb3cf7b46e59bd58c8c1232cc9a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:30.477 00:56:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:09:30.477 00:56:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:30.477 00:56:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:30.477 [ 1]:0x2 00:09:30.477 00:56:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:30.477 00:56:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:30.477 00:56:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5d0cd6ec15a54e009f8ed59024779c31 00:09:30.477 00:56:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5d0cd6ec15a54e009f8ed59024779c31 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:30.477 00:56:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:09:30.477 00:56:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:30.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.734 00:56:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.992 00:56:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:31.250 00:56:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:09:31.250 00:56:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e328b6af-1447-4836-8b3c-85e6fd54157d -a 10.0.0.2 -s 4420 -i 4 00:09:31.250 00:56:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:31.250 00:56:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:09:31.250 00:56:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:31.250 00:56:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:09:31.250 00:56:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:09:31.250 00:56:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # 
grep -c SPDKISFASTANDAWESOME 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:33.779 [ 0]:0x2 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5d0cd6ec15a54e009f8ed59024779c31 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5d0cd6ec15a54e009f8ed59024779c31 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:33.779 [ 0]:0x1 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9a298bb3cf7b46e59bd58c8c1232cc9a 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9a298bb3cf7b46e59bd58c8c1232cc9a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:33.779 [ 1]:0x2 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:33.779 00:56:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:33.779 00:56:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5d0cd6ec15a54e009f8ed59024779c31 00:09:33.779 00:56:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5d0cd6ec15a54e009f8ed59024779c31 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:33.779 00:56:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:34.037 00:56:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:09:34.037 00:56:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:34.037 00:56:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:34.037 00:56:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:34.037 00:56:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:34.037 00:56:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:34.037 00:56:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:34.037 00:56:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:34.037 00:56:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:34.037 00:56:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:34.037 00:56:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:34.037 00:56:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:34.037 00:56:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:34.037 00:56:46 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:34.037 00:56:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:34.037 00:56:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:34.037 00:56:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:34.037 00:56:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:34.037 00:56:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:09:34.037 00:56:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:34.037 00:56:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:34.037 [ 0]:0x2 00:09:34.037 00:56:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:34.037 00:56:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:34.037 00:56:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5d0cd6ec15a54e009f8ed59024779c31 00:09:34.037 00:56:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5d0cd6ec15a54e009f8ed59024779c31 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:34.037 00:56:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:09:34.037 00:56:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:34.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.037 00:56:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:34.295 00:56:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:09:34.295 00:56:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e328b6af-1447-4836-8b3c-85e6fd54157d -a 10.0.0.2 -s 4420 -i 4 00:09:34.554 00:56:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:34.554 00:56:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:09:34.554 00:56:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:34.554 00:56:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:09:34.554 00:56:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:09:34.554 00:56:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:09:36.453 00:56:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:36.453 00:56:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:36.453 00:56:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:36.453 00:56:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:09:36.453 00:56:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:36.453 00:56:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:09:36.453 00:56:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme list-subsys -o json 00:09:36.453 00:56:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:36.710 00:56:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:36.710 00:56:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:36.710 00:56:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:09:36.710 00:56:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:36.710 00:56:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:36.710 [ 0]:0x1 00:09:36.710 00:56:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:36.710 00:56:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:36.710 00:56:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9a298bb3cf7b46e59bd58c8c1232cc9a 00:09:36.710 00:56:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9a298bb3cf7b46e59bd58c8c1232cc9a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:36.710 00:56:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:09:36.711 00:56:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:36.711 00:56:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:36.711 [ 1]:0x2 00:09:36.711 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:36.711 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:36.711 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5d0cd6ec15a54e009f8ed59024779c31 00:09:36.711 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5d0cd6ec15a54e009f8ed59024779c31 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:36.711 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking 
-- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:37.277 [ 0]:0x2 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5d0cd6ec15a54e009f8ed59024779c31 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5d0cd6ec15a54e009f8ed59024779c31 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:37.277 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:37.536 [2024-05-15 00:56:49.686708] nvmf_rpc.c:1776:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:09:37.536 
request: 00:09:37.536 { 00:09:37.536 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:37.536 "nsid": 2, 00:09:37.536 "host": "nqn.2016-06.io.spdk:host1", 00:09:37.536 "method": "nvmf_ns_remove_host", 00:09:37.536 "req_id": 1 00:09:37.536 } 00:09:37.536 Got JSON-RPC error response 00:09:37.536 response: 00:09:37.536 { 00:09:37.536 "code": -32602, 00:09:37.536 "message": "Invalid parameters" 00:09:37.536 } 00:09:37.536 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:37.536 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:37.536 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:37.536 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:37.536 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:09:37.536 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:37.536 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:37.536 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:37.536 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:37.536 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:37.536 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:37.536 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:37.536 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:37.536 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:37.536 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:37.536 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:37.536 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:37.536 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:37.536 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:37.536 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:37.536 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:37.536 00:56:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:37.536 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:09:37.536 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:37.536 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:37.536 [ 0]:0x2 00:09:37.536 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:37.536 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:37.536 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5d0cd6ec15a54e009f8ed59024779c31 00:09:37.536 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5d0cd6ec15a54e009f8ed59024779c31 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:37.536 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:09:37.536 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:37.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.794 00:56:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:38.051 00:56:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:09:38.051 00:56:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:09:38.051 00:56:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:38.051 00:56:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:09:38.051 00:56:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:38.051 00:56:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:09:38.051 00:56:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:38.051 00:56:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:38.051 rmmod nvme_tcp 00:09:38.051 rmmod nvme_fabrics 00:09:38.051 rmmod nvme_keyring 00:09:38.051 00:56:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:38.051 00:56:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:09:38.051 00:56:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:09:38.051 00:56:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1192619 ']' 00:09:38.051 00:56:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1192619 00:09:38.051 00:56:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 1192619 ']' 00:09:38.051 00:56:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 1192619 00:09:38.051 00:56:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:09:38.051 00:56:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:38.051 00:56:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1192619 00:09:38.051 00:56:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:38.051 00:56:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:38.051 00:56:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1192619' 00:09:38.051 killing process with pid 1192619 00:09:38.051 00:56:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 1192619 00:09:38.051 [2024-05-15 00:56:50.295884] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:38.051 00:56:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 1192619 00:09:38.308 00:56:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:38.308 00:56:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:38.308 00:56:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:38.308 00:56:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s 
]] 00:09:38.308 00:56:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:38.308 00:56:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.308 00:56:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:38.308 00:56:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.845 00:56:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:40.845 00:09:40.845 real 0m17.101s 00:09:40.845 user 0m51.923s 00:09:40.845 sys 0m3.927s 00:09:40.845 00:56:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:40.845 00:56:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:40.845 ************************************ 00:09:40.845 END TEST nvmf_ns_masking 00:09:40.845 ************************************ 00:09:40.845 00:56:52 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:09:40.845 00:56:52 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:40.845 00:56:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:40.845 00:56:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:40.845 00:56:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:40.845 ************************************ 00:09:40.845 START TEST nvmf_nvme_cli 00:09:40.845 ************************************ 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:40.845 * Looking for test storage... 
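For reference, the namespace masking flow exercised by the nvmf_ns_masking run above boils down to a few rpc.py and nvme-cli calls. The sketch below is not the test script itself; rpc.py is shown relative to the SPDK checkout, and the NQNs, namespace ID and device name are the ones this run used. The visibility check mirrors the ns_is_visible helper: an all-zero NGUID from nvme id-ns means the namespace is masked for the connected host.

  # add a namespace that stays hidden from all hosts until explicitly exposed
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  # expose the namespace to one host, then hide it again
  scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  # check visibility from the connected initiator (mirrors ns_is_visible 0x1)
  nvme list-ns /dev/nvme0 | grep 0x1
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # all zeros => masked for this host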
00:09:40.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:09:40.845 00:56:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:43.372 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:43.372 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:09:43.372 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:43.372 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:43.372 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:43.372 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:43.372 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:43.372 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:09:43.372 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:43.372 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:09:43.372 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:09:43.372 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:09:43.372 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:09:43.372 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:09:43.372 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:09:43.372 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:43.372 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:43.372 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:43.372 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:43.372 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:43.372 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:43.372 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:43.372 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:43.372 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:43.372 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:43.372 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:43.372 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:43.372 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:43.373 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:43.373 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:43.373 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:43.373 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:43.373 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:43.373 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:09:43.373 00:09:43.373 --- 10.0.0.2 ping statistics --- 00:09:43.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.373 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:43.373 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:43.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:09:43.373 00:09:43.373 --- 10.0.0.1 ping statistics --- 00:09:43.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.373 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1196474 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1196474 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 1196474 ']' 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:43.373 00:56:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:43.373 [2024-05-15 00:56:55.443202] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:09:43.373 [2024-05-15 00:56:55.443293] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.373 EAL: No free 2048 kB hugepages reported on node 1 00:09:43.373 [2024-05-15 00:56:55.533242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:43.373 [2024-05-15 00:56:55.659290] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:43.373 [2024-05-15 00:56:55.659353] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
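The nvme_cli test drives the target from a dedicated network namespace; the bring-up just traced amounts to the following sketch (interface names, addresses, port and core mask are the ones from this run, and nvmf_tgt is shown relative to the SPDK build directory):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
  modprobe nvme-tcp
  # start the target inside the namespace with the same flags as this run
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &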
00:09:43.373 [2024-05-15 00:56:55.659369] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:43.373 [2024-05-15 00:56:55.659383] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:43.374 [2024-05-15 00:56:55.659394] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:43.374 [2024-05-15 00:56:55.659477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.374 [2024-05-15 00:56:55.659535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:43.374 [2024-05-15 00:56:55.659563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:43.374 [2024-05-15 00:56:55.659566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:43.632 [2024-05-15 00:56:55.826821] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:43.632 Malloc0 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:43.632 Malloc1 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.632 00:56:55 
nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:43.632 [2024-05-15 00:56:55.908387] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:43.632 [2024-05-15 00:56:55.908698] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.632 00:56:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:09:43.890 00:09:43.890 Discovery Log Number of Records 2, Generation counter 2 00:09:43.890 =====Discovery Log Entry 0====== 00:09:43.890 trtype: tcp 00:09:43.890 adrfam: ipv4 00:09:43.890 subtype: current discovery subsystem 00:09:43.890 treq: not required 00:09:43.890 portid: 0 00:09:43.890 trsvcid: 4420 00:09:43.890 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:43.890 traddr: 10.0.0.2 00:09:43.890 eflags: explicit discovery connections, duplicate discovery information 00:09:43.890 sectype: none 00:09:43.890 =====Discovery Log Entry 1====== 00:09:43.890 trtype: tcp 00:09:43.890 adrfam: ipv4 00:09:43.890 subtype: nvme subsystem 00:09:43.890 treq: not required 00:09:43.890 portid: 0 00:09:43.890 trsvcid: 4420 00:09:43.890 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:43.890 traddr: 10.0.0.2 00:09:43.890 eflags: none 00:09:43.890 sectype: none 00:09:43.890 00:56:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:09:43.890 00:56:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:09:43.890 00:56:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:43.890 00:56:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:43.890 00:56:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:43.890 00:56:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:43.890 00:56:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
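Put together, the subsystem configuration and the discovery/connect sequence traced in this test look roughly like the sketch below (serial number, controller model, NQNs, host UUID and listener address are the ones generated for this run; rpc.py is again shown relative to the SPDK checkout):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # from the initiator side:
  nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1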
00:09:43.890 00:56:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:43.890 00:56:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:43.890 00:56:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:09:43.890 00:56:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:44.455 00:56:56 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:44.455 00:56:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:09:44.455 00:56:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:44.455 00:56:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:09:44.455 00:56:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:09:44.455 00:56:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:09:46.413 00:56:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:46.413 00:56:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:46.413 00:56:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:46.413 00:56:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:09:46.413 00:56:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:46.413 00:56:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:09:46.413 00:56:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:09:46.413 00:56:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:46.413 00:56:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:46.413 00:56:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:46.672 00:56:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:46.672 00:56:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:46.672 00:56:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:46.672 00:56:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:46.672 00:56:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:46.672 00:56:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:46.672 00:56:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:46.672 00:56:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:46.672 00:56:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:46.672 00:56:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:46.672 00:56:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:09:46.672 /dev/nvme0n1 ]] 00:09:46.672 00:56:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:09:46.672 00:56:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:09:46.672 00:56:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:46.672 00:56:58 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:46.672 00:56:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:46.672 00:56:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:46.672 00:56:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:46.672 00:56:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:46.672 00:56:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:46.672 00:56:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:46.672 00:56:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:46.672 00:56:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:46.672 00:56:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:46.672 00:56:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:46.672 00:56:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:46.672 00:56:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:09:46.672 00:56:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:46.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.930 00:56:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:46.930 00:56:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:09:46.930 00:56:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:46.930 00:56:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:46.930 00:56:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:46.930 00:56:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:46.930 00:56:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:09:46.930 00:56:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:09:46.930 00:56:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:46.930 00:56:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.930 00:56:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:46.930 00:56:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.930 00:56:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:46.930 00:56:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:09:46.930 00:56:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:46.930 00:56:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:09:46.930 00:56:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:46.930 00:56:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:09:46.930 00:56:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:46.930 00:56:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:46.930 rmmod nvme_tcp 00:09:46.930 rmmod nvme_fabrics 00:09:46.930 rmmod nvme_keyring 00:09:47.189 00:56:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:09:47.189 00:56:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:09:47.189 00:56:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:09:47.189 00:56:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1196474 ']' 00:09:47.189 00:56:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1196474 00:09:47.189 00:56:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 1196474 ']' 00:09:47.189 00:56:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 1196474 00:09:47.189 00:56:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:09:47.189 00:56:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:47.189 00:56:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1196474 00:09:47.189 00:56:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:47.189 00:56:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:47.189 00:56:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1196474' 00:09:47.189 killing process with pid 1196474 00:09:47.189 00:56:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 1196474 00:09:47.189 [2024-05-15 00:56:59.357185] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:47.189 00:56:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 1196474 00:09:47.447 00:56:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:47.447 00:56:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:47.447 00:56:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:47.447 00:56:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:47.447 00:56:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:47.447 00:56:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.447 00:56:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:47.447 00:56:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.351 00:57:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:49.351 00:09:49.351 real 0m9.009s 00:09:49.351 user 0m16.520s 00:09:49.351 sys 0m2.509s 00:09:49.351 00:57:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:49.351 00:57:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:49.351 ************************************ 00:09:49.351 END TEST nvmf_nvme_cli 00:09:49.351 ************************************ 00:09:49.610 00:57:01 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:09:49.610 00:57:01 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:49.610 00:57:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:49.610 00:57:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:49.610 00:57:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:49.610 ************************************ 00:09:49.610 START 
TEST nvmf_vfio_user 00:09:49.610 ************************************ 00:09:49.610 00:57:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:49.610 * Looking for test storage... 00:09:49.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:49.610 00:57:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:49.610 00:57:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:09:49.610 00:57:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:49.610 00:57:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:49.610 00:57:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:49.610 00:57:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:49.610 00:57:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:49.610 00:57:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:49.610 00:57:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:49.610 00:57:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:49.610 00:57:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:49.610 00:57:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 
00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1197406 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1197406' 00:09:49.611 Process pid: 1197406 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1197406 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 1197406 ']' 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:49.611 00:57:01 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:09:49.611 [2024-05-15 00:57:01.921375] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:09:49.611 [2024-05-15 00:57:01.921466] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.611 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.611 [2024-05-15 00:57:01.992431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:49.869 [2024-05-15 00:57:02.103913] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:49.869 [2024-05-15 00:57:02.103984] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:49.869 [2024-05-15 00:57:02.104013] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.869 [2024-05-15 00:57:02.104025] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:49.869 [2024-05-15 00:57:02.104035] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:49.869 [2024-05-15 00:57:02.104104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.869 [2024-05-15 00:57:02.104126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:49.869 [2024-05-15 00:57:02.104174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:49.869 [2024-05-15 00:57:02.104177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.869 00:57:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:49.869 00:57:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:09:49.869 00:57:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:09:51.239 00:57:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:09:51.239 00:57:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:09:51.239 00:57:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:09:51.239 00:57:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:51.239 00:57:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:09:51.239 00:57:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:51.496 Malloc1 00:09:51.496 00:57:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:09:51.753 00:57:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:09:52.010 00:57:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:09:52.266 [2024-05-15 00:57:04.583817] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:52.266 00:57:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:52.266 00:57:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:09:52.266 00:57:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:52.524 Malloc2 00:09:52.524 00:57:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:09:52.782 00:57:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:09:53.040 00:57:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 
00:09:53.299 00:57:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:09:53.299 00:57:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:09:53.299 00:57:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:53.299 00:57:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:09:53.299 00:57:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:09:53.299 00:57:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:09:53.299 [2024-05-15 00:57:05.599542] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:09:53.299 [2024-05-15 00:57:05.599594] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197939 ] 00:09:53.299 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.299 [2024-05-15 00:57:05.633483] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:09:53.299 [2024-05-15 00:57:05.639398] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:53.299 [2024-05-15 00:57:05.639429] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fe984fa0000 00:09:53.299 [2024-05-15 00:57:05.640390] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:53.299 [2024-05-15 00:57:05.641384] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:53.299 [2024-05-15 00:57:05.642392] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:53.299 [2024-05-15 00:57:05.643392] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:53.299 [2024-05-15 00:57:05.644397] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:53.299 [2024-05-15 00:57:05.645405] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:53.299 [2024-05-15 00:57:05.646412] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:53.299 [2024-05-15 00:57:05.647419] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:53.299 [2024-05-15 00:57:05.648425] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:53.299 [2024-05-15 00:57:05.648449] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fe984f95000 00:09:53.299 [2024-05-15 00:57:05.649605] 
vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:53.299 [2024-05-15 00:57:05.663847] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:09:53.299 [2024-05-15 00:57:05.663885] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:09:53.299 [2024-05-15 00:57:05.672606] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:53.299 [2024-05-15 00:57:05.672670] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:09:53.299 [2024-05-15 00:57:05.672785] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:09:53.299 [2024-05-15 00:57:05.672810] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:09:53.299 [2024-05-15 00:57:05.672820] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:09:53.299 [2024-05-15 00:57:05.673588] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:09:53.299 [2024-05-15 00:57:05.673607] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:09:53.299 [2024-05-15 00:57:05.673620] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:09:53.299 [2024-05-15 00:57:05.674589] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:53.299 [2024-05-15 00:57:05.674607] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:09:53.299 [2024-05-15 00:57:05.674620] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:09:53.299 [2024-05-15 00:57:05.675597] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:09:53.299 [2024-05-15 00:57:05.675616] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:09:53.299 [2024-05-15 00:57:05.676605] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:09:53.299 [2024-05-15 00:57:05.676623] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:09:53.299 [2024-05-15 00:57:05.676632] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:09:53.299 [2024-05-15 00:57:05.676644] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:53.299 
[2024-05-15 00:57:05.676758] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:09:53.299 [2024-05-15 00:57:05.676767] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:53.299 [2024-05-15 00:57:05.676776] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:09:53.299 [2024-05-15 00:57:05.677616] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:09:53.299 [2024-05-15 00:57:05.678617] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:09:53.299 [2024-05-15 00:57:05.679628] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:53.299 [2024-05-15 00:57:05.680624] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:53.299 [2024-05-15 00:57:05.680736] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:53.299 [2024-05-15 00:57:05.681639] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:09:53.299 [2024-05-15 00:57:05.681657] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:53.299 [2024-05-15 00:57:05.681665] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:09:53.299 [2024-05-15 00:57:05.681690] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:09:53.299 [2024-05-15 00:57:05.681703] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:09:53.299 [2024-05-15 00:57:05.681727] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:53.299 [2024-05-15 00:57:05.681737] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:53.299 [2024-05-15 00:57:05.681756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:53.299 [2024-05-15 00:57:05.681819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:53.300 [2024-05-15 00:57:05.681835] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:09:53.300 [2024-05-15 00:57:05.681843] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:09:53.300 [2024-05-15 00:57:05.681851] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:09:53.300 [2024-05-15 00:57:05.681858] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:53.300 [2024-05-15 00:57:05.681866] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:09:53.300 [2024-05-15 00:57:05.681873] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:09:53.300 [2024-05-15 00:57:05.681881] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:09:53.300 [2024-05-15 00:57:05.681898] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:09:53.300 [2024-05-15 00:57:05.681937] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:09:53.300 [2024-05-15 00:57:05.681962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:53.300 [2024-05-15 00:57:05.681979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:53.300 [2024-05-15 00:57:05.681993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:53.300 [2024-05-15 00:57:05.682005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:53.300 [2024-05-15 00:57:05.682017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:53.300 [2024-05-15 00:57:05.682026] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:09:53.300 [2024-05-15 00:57:05.682041] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:53.300 [2024-05-15 00:57:05.682057] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:53.300 [2024-05-15 00:57:05.682069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:53.300 [2024-05-15 00:57:05.682080] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:09:53.300 [2024-05-15 00:57:05.682089] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:53.300 [2024-05-15 00:57:05.682099] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:09:53.300 [2024-05-15 00:57:05.682113] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:09:53.300 [2024-05-15 00:57:05.682127] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:53.300 [2024-05-15 
00:57:05.682141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:53.300 [2024-05-15 00:57:05.682198] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:09:53.300 [2024-05-15 00:57:05.682214] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:09:53.300 [2024-05-15 00:57:05.682242] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:53.300 [2024-05-15 00:57:05.682251] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:53.300 [2024-05-15 00:57:05.682260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:09:53.300 [2024-05-15 00:57:05.682276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:53.300 [2024-05-15 00:57:05.682296] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:09:53.300 [2024-05-15 00:57:05.682316] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:09:53.300 [2024-05-15 00:57:05.682329] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:09:53.300 [2024-05-15 00:57:05.682341] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:53.300 [2024-05-15 00:57:05.682353] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:53.300 [2024-05-15 00:57:05.682363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:53.300 [2024-05-15 00:57:05.682389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:53.300 [2024-05-15 00:57:05.682406] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:53.300 [2024-05-15 00:57:05.682419] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:09:53.300 [2024-05-15 00:57:05.682430] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:53.300 [2024-05-15 00:57:05.682438] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:53.300 [2024-05-15 00:57:05.682448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:53.300 [2024-05-15 00:57:05.682461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:53.300 [2024-05-15 00:57:05.682479] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:53.300 
[2024-05-15 00:57:05.682491] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:09:53.300 [2024-05-15 00:57:05.682504] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:09:53.300 [2024-05-15 00:57:05.682515] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:09:53.300 [2024-05-15 00:57:05.682523] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:09:53.300 [2024-05-15 00:57:05.682531] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:09:53.300 [2024-05-15 00:57:05.682538] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:09:53.300 [2024-05-15 00:57:05.682547] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:09:53.300 [2024-05-15 00:57:05.682577] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:53.300 [2024-05-15 00:57:05.682596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:53.300 [2024-05-15 00:57:05.682615] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:53.300 [2024-05-15 00:57:05.682627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:53.300 [2024-05-15 00:57:05.682643] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:53.300 [2024-05-15 00:57:05.682655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:53.300 [2024-05-15 00:57:05.682671] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:53.300 [2024-05-15 00:57:05.682683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:09:53.300 [2024-05-15 00:57:05.682704] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:53.300 [2024-05-15 00:57:05.682714] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:53.300 [2024-05-15 00:57:05.682720] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:09:53.300 [2024-05-15 00:57:05.682726] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:09:53.300 [2024-05-15 00:57:05.682736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:53.300 [2024-05-15 00:57:05.682748] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:53.300 [2024-05-15 00:57:05.682756] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:53.300 [2024-05-15 00:57:05.682765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:53.300 [2024-05-15 00:57:05.682776] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:53.300 [2024-05-15 00:57:05.682784] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:53.300 [2024-05-15 00:57:05.682792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:53.300 [2024-05-15 00:57:05.682809] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:53.300 [2024-05-15 00:57:05.682818] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:09:53.300 [2024-05-15 00:57:05.682827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:09:53.300 [2024-05-15 00:57:05.682839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:09:53.300 [2024-05-15 00:57:05.682859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:53.300 [2024-05-15 00:57:05.682877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:53.300 [2024-05-15 00:57:05.682892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:53.300 ===================================================== 00:09:53.300 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:53.300 ===================================================== 00:09:53.300 Controller Capabilities/Features 00:09:53.300 ================================ 00:09:53.300 Vendor ID: 4e58 00:09:53.300 Subsystem Vendor ID: 4e58 00:09:53.300 Serial Number: SPDK1 00:09:53.300 Model Number: SPDK bdev Controller 00:09:53.300 Firmware Version: 24.05 00:09:53.300 Recommended Arb Burst: 6 00:09:53.300 IEEE OUI Identifier: 8d 6b 50 00:09:53.300 Multi-path I/O 00:09:53.300 May have multiple subsystem ports: Yes 00:09:53.300 May have multiple controllers: Yes 00:09:53.300 Associated with SR-IOV VF: No 00:09:53.301 Max Data Transfer Size: 131072 00:09:53.301 Max Number of Namespaces: 32 00:09:53.301 Max Number of I/O Queues: 127 00:09:53.301 NVMe Specification Version (VS): 1.3 00:09:53.301 NVMe Specification Version (Identify): 1.3 00:09:53.301 Maximum Queue Entries: 256 00:09:53.301 Contiguous Queues Required: Yes 00:09:53.301 Arbitration Mechanisms Supported 00:09:53.301 Weighted Round Robin: Not Supported 00:09:53.301 Vendor Specific: Not Supported 00:09:53.301 Reset Timeout: 15000 ms 00:09:53.301 Doorbell Stride: 4 bytes 00:09:53.301 NVM Subsystem Reset: Not Supported 00:09:53.301 Command Sets Supported 00:09:53.301 NVM Command Set: Supported 00:09:53.301 Boot Partition: Not Supported 00:09:53.301 Memory Page Size Minimum: 4096 bytes 00:09:53.301 Memory Page Size Maximum: 4096 bytes 00:09:53.301 Persistent Memory Region: Not Supported 00:09:53.301 Optional Asynchronous 
Events Supported 00:09:53.301 Namespace Attribute Notices: Supported 00:09:53.301 Firmware Activation Notices: Not Supported 00:09:53.301 ANA Change Notices: Not Supported 00:09:53.301 PLE Aggregate Log Change Notices: Not Supported 00:09:53.301 LBA Status Info Alert Notices: Not Supported 00:09:53.301 EGE Aggregate Log Change Notices: Not Supported 00:09:53.301 Normal NVM Subsystem Shutdown event: Not Supported 00:09:53.301 Zone Descriptor Change Notices: Not Supported 00:09:53.301 Discovery Log Change Notices: Not Supported 00:09:53.301 Controller Attributes 00:09:53.301 128-bit Host Identifier: Supported 00:09:53.301 Non-Operational Permissive Mode: Not Supported 00:09:53.301 NVM Sets: Not Supported 00:09:53.301 Read Recovery Levels: Not Supported 00:09:53.301 Endurance Groups: Not Supported 00:09:53.301 Predictable Latency Mode: Not Supported 00:09:53.301 Traffic Based Keep ALive: Not Supported 00:09:53.301 Namespace Granularity: Not Supported 00:09:53.301 SQ Associations: Not Supported 00:09:53.301 UUID List: Not Supported 00:09:53.301 Multi-Domain Subsystem: Not Supported 00:09:53.301 Fixed Capacity Management: Not Supported 00:09:53.301 Variable Capacity Management: Not Supported 00:09:53.301 Delete Endurance Group: Not Supported 00:09:53.301 Delete NVM Set: Not Supported 00:09:53.301 Extended LBA Formats Supported: Not Supported 00:09:53.301 Flexible Data Placement Supported: Not Supported 00:09:53.301 00:09:53.301 Controller Memory Buffer Support 00:09:53.301 ================================ 00:09:53.301 Supported: No 00:09:53.301 00:09:53.301 Persistent Memory Region Support 00:09:53.301 ================================ 00:09:53.301 Supported: No 00:09:53.301 00:09:53.301 Admin Command Set Attributes 00:09:53.301 ============================ 00:09:53.301 Security Send/Receive: Not Supported 00:09:53.301 Format NVM: Not Supported 00:09:53.301 Firmware Activate/Download: Not Supported 00:09:53.301 Namespace Management: Not Supported 00:09:53.301 Device Self-Test: Not Supported 00:09:53.301 Directives: Not Supported 00:09:53.301 NVMe-MI: Not Supported 00:09:53.301 Virtualization Management: Not Supported 00:09:53.301 Doorbell Buffer Config: Not Supported 00:09:53.301 Get LBA Status Capability: Not Supported 00:09:53.301 Command & Feature Lockdown Capability: Not Supported 00:09:53.301 Abort Command Limit: 4 00:09:53.301 Async Event Request Limit: 4 00:09:53.301 Number of Firmware Slots: N/A 00:09:53.301 Firmware Slot 1 Read-Only: N/A 00:09:53.301 Firmware Activation Without Reset: N/A 00:09:53.301 Multiple Update Detection Support: N/A 00:09:53.301 Firmware Update Granularity: No Information Provided 00:09:53.301 Per-Namespace SMART Log: No 00:09:53.301 Asymmetric Namespace Access Log Page: Not Supported 00:09:53.301 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:09:53.301 Command Effects Log Page: Supported 00:09:53.301 Get Log Page Extended Data: Supported 00:09:53.301 Telemetry Log Pages: Not Supported 00:09:53.301 Persistent Event Log Pages: Not Supported 00:09:53.301 Supported Log Pages Log Page: May Support 00:09:53.301 Commands Supported & Effects Log Page: Not Supported 00:09:53.301 Feature Identifiers & Effects Log Page:May Support 00:09:53.301 NVMe-MI Commands & Effects Log Page: May Support 00:09:53.301 Data Area 4 for Telemetry Log: Not Supported 00:09:53.301 Error Log Page Entries Supported: 128 00:09:53.301 Keep Alive: Supported 00:09:53.301 Keep Alive Granularity: 10000 ms 00:09:53.301 00:09:53.301 NVM Command Set Attributes 00:09:53.301 ========================== 
00:09:53.301 Submission Queue Entry Size 00:09:53.301 Max: 64 00:09:53.301 Min: 64 00:09:53.301 Completion Queue Entry Size 00:09:53.301 Max: 16 00:09:53.301 Min: 16 00:09:53.301 Number of Namespaces: 32 00:09:53.301 Compare Command: Supported 00:09:53.301 Write Uncorrectable Command: Not Supported 00:09:53.301 Dataset Management Command: Supported 00:09:53.301 Write Zeroes Command: Supported 00:09:53.301 Set Features Save Field: Not Supported 00:09:53.301 Reservations: Not Supported 00:09:53.301 Timestamp: Not Supported 00:09:53.301 Copy: Supported 00:09:53.301 Volatile Write Cache: Present 00:09:53.301 Atomic Write Unit (Normal): 1 00:09:53.301 Atomic Write Unit (PFail): 1 00:09:53.301 Atomic Compare & Write Unit: 1 00:09:53.301 Fused Compare & Write: Supported 00:09:53.301 Scatter-Gather List 00:09:53.301 SGL Command Set: Supported (Dword aligned) 00:09:53.301 SGL Keyed: Not Supported 00:09:53.301 SGL Bit Bucket Descriptor: Not Supported 00:09:53.301 SGL Metadata Pointer: Not Supported 00:09:53.301 Oversized SGL: Not Supported 00:09:53.301 SGL Metadata Address: Not Supported 00:09:53.301 SGL Offset: Not Supported 00:09:53.301 Transport SGL Data Block: Not Supported 00:09:53.301 Replay Protected Memory Block: Not Supported 00:09:53.301 00:09:53.301 Firmware Slot Information 00:09:53.301 ========================= 00:09:53.301 Active slot: 1 00:09:53.301 Slot 1 Firmware Revision: 24.05 00:09:53.301 00:09:53.301 00:09:53.301 Commands Supported and Effects 00:09:53.301 ============================== 00:09:53.301 Admin Commands 00:09:53.301 -------------- 00:09:53.301 Get Log Page (02h): Supported 00:09:53.301 Identify (06h): Supported 00:09:53.301 Abort (08h): Supported 00:09:53.301 Set Features (09h): Supported 00:09:53.301 Get Features (0Ah): Supported 00:09:53.301 Asynchronous Event Request (0Ch): Supported 00:09:53.301 Keep Alive (18h): Supported 00:09:53.301 I/O Commands 00:09:53.301 ------------ 00:09:53.301 Flush (00h): Supported LBA-Change 00:09:53.301 Write (01h): Supported LBA-Change 00:09:53.301 Read (02h): Supported 00:09:53.301 Compare (05h): Supported 00:09:53.301 Write Zeroes (08h): Supported LBA-Change 00:09:53.301 Dataset Management (09h): Supported LBA-Change 00:09:53.301 Copy (19h): Supported LBA-Change 00:09:53.301 Unknown (79h): Supported LBA-Change 00:09:53.301 Unknown (7Ah): Supported 00:09:53.301 00:09:53.301 Error Log 00:09:53.301 ========= 00:09:53.301 00:09:53.301 Arbitration 00:09:53.301 =========== 00:09:53.301 Arbitration Burst: 1 00:09:53.301 00:09:53.301 Power Management 00:09:53.301 ================ 00:09:53.301 Number of Power States: 1 00:09:53.301 Current Power State: Power State #0 00:09:53.301 Power State #0: 00:09:53.301 Max Power: 0.00 W 00:09:53.301 Non-Operational State: Operational 00:09:53.301 Entry Latency: Not Reported 00:09:53.301 Exit Latency: Not Reported 00:09:53.301 Relative Read Throughput: 0 00:09:53.301 Relative Read Latency: 0 00:09:53.301 Relative Write Throughput: 0 00:09:53.301 Relative Write Latency: 0 00:09:53.301 Idle Power: Not Reported 00:09:53.301 Active Power: Not Reported 00:09:53.301 Non-Operational Permissive Mode: Not Supported 00:09:53.301 00:09:53.301 Health Information 00:09:53.301 ================== 00:09:53.301 Critical Warnings: 00:09:53.301 Available Spare Space: OK 00:09:53.301 Temperature: OK 00:09:53.301 Device Reliability: OK 00:09:53.301 Read Only: No 00:09:53.301 Volatile Memory Backup: OK 00:09:53.301 Current Temperature: 0 Kelvin (-2[2024-05-15 00:57:05.683037] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:53.301 [2024-05-15 00:57:05.683055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:09:53.301 [2024-05-15 00:57:05.683093] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:09:53.301 [2024-05-15 00:57:05.683110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.301 [2024-05-15 00:57:05.683122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.301 [2024-05-15 00:57:05.683132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.301 [2024-05-15 00:57:05.683142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.301 [2024-05-15 00:57:05.683666] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:53.301 [2024-05-15 00:57:05.683686] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:09:53.302 [2024-05-15 00:57:05.684662] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:53.302 [2024-05-15 00:57:05.684744] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:09:53.302 [2024-05-15 00:57:05.684760] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:09:53.302 [2024-05-15 00:57:05.685657] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:09:53.302 [2024-05-15 00:57:05.685680] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:09:53.302 [2024-05-15 00:57:05.685739] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:09:53.559 [2024-05-15 00:57:05.690941] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:53.559 73 Celsius) 00:09:53.559 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:53.559 Available Spare: 0% 00:09:53.559 Available Spare Threshold: 0% 00:09:53.559 Life Percentage Used: 0% 00:09:53.559 Data Units Read: 0 00:09:53.559 Data Units Written: 0 00:09:53.559 Host Read Commands: 0 00:09:53.559 Host Write Commands: 0 00:09:53.559 Controller Busy Time: 0 minutes 00:09:53.559 Power Cycles: 0 00:09:53.559 Power On Hours: 0 hours 00:09:53.559 Unsafe Shutdowns: 0 00:09:53.559 Unrecoverable Media Errors: 0 00:09:53.559 Lifetime Error Log Entries: 0 00:09:53.559 Warning Temperature Time: 0 minutes 00:09:53.559 Critical Temperature Time: 0 minutes 00:09:53.559 00:09:53.559 Number of Queues 00:09:53.559 ================ 00:09:53.559 Number of I/O Submission Queues: 127 00:09:53.559 Number of I/O Completion Queues: 127 00:09:53.559 00:09:53.559 Active Namespaces 00:09:53.559 ================= 00:09:53.559 Namespace 
ID:1 00:09:53.559 Error Recovery Timeout: Unlimited 00:09:53.559 Command Set Identifier: NVM (00h) 00:09:53.559 Deallocate: Supported 00:09:53.559 Deallocated/Unwritten Error: Not Supported 00:09:53.559 Deallocated Read Value: Unknown 00:09:53.559 Deallocate in Write Zeroes: Not Supported 00:09:53.559 Deallocated Guard Field: 0xFFFF 00:09:53.559 Flush: Supported 00:09:53.559 Reservation: Supported 00:09:53.559 Namespace Sharing Capabilities: Multiple Controllers 00:09:53.559 Size (in LBAs): 131072 (0GiB) 00:09:53.559 Capacity (in LBAs): 131072 (0GiB) 00:09:53.559 Utilization (in LBAs): 131072 (0GiB) 00:09:53.559 NGUID: 3FC929C1BA81446ABE66C62CC1A7B978 00:09:53.559 UUID: 3fc929c1-ba81-446a-be66-c62cc1a7b978 00:09:53.559 Thin Provisioning: Not Supported 00:09:53.559 Per-NS Atomic Units: Yes 00:09:53.559 Atomic Boundary Size (Normal): 0 00:09:53.559 Atomic Boundary Size (PFail): 0 00:09:53.559 Atomic Boundary Offset: 0 00:09:53.559 Maximum Single Source Range Length: 65535 00:09:53.559 Maximum Copy Length: 65535 00:09:53.559 Maximum Source Range Count: 1 00:09:53.559 NGUID/EUI64 Never Reused: No 00:09:53.559 Namespace Write Protected: No 00:09:53.559 Number of LBA Formats: 1 00:09:53.559 Current LBA Format: LBA Format #00 00:09:53.559 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:53.559 00:09:53.559 00:57:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:53.559 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.560 [2024-05-15 00:57:05.920755] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:58.820 Initializing NVMe Controllers 00:09:58.820 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:58.820 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:58.820 Initialization complete. Launching workers. 00:09:58.820 ======================================================== 00:09:58.820 Latency(us) 00:09:58.820 Device Information : IOPS MiB/s Average min max 00:09:58.820 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33795.00 132.01 3787.17 1148.56 8144.70 00:09:58.820 ======================================================== 00:09:58.820 Total : 33795.00 132.01 3787.17 1148.56 8144.70 00:09:58.820 00:09:58.820 [2024-05-15 00:57:10.943391] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:58.820 00:57:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:09:58.820 EAL: No free 2048 kB hugepages reported on node 1 00:09:58.820 [2024-05-15 00:57:11.184594] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:04.096 Initializing NVMe Controllers 00:10:04.096 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:04.096 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:10:04.096 Initialization complete. Launching workers. 
00:10:04.096 ======================================================== 00:10:04.096 Latency(us) 00:10:04.096 Device Information : IOPS MiB/s Average min max 00:10:04.096 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7984.25 6997.55 11972.24 00:10:04.096 ======================================================== 00:10:04.096 Total : 16051.20 62.70 7984.25 6997.55 11972.24 00:10:04.096 00:10:04.096 [2024-05-15 00:57:16.219970] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:04.096 00:57:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:04.096 EAL: No free 2048 kB hugepages reported on node 1 00:10:04.096 [2024-05-15 00:57:16.449140] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:09.406 [2024-05-15 00:57:21.511205] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:09.406 Initializing NVMe Controllers 00:10:09.406 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:09.406 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:09.406 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:10:09.406 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:10:09.406 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:10:09.406 Initialization complete. Launching workers. 00:10:09.406 Starting thread on core 2 00:10:09.406 Starting thread on core 3 00:10:09.406 Starting thread on core 1 00:10:09.406 00:57:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:10:09.406 EAL: No free 2048 kB hugepages reported on node 1 00:10:09.667 [2024-05-15 00:57:21.821415] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:12.959 [2024-05-15 00:57:24.880839] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:12.959 Initializing NVMe Controllers 00:10:12.959 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:12.959 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:12.959 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:10:12.959 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:10:12.959 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:10:12.959 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:10:12.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:12.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:12.959 Initialization complete. Launching workers. 
00:10:12.959 Starting thread on core 1 with urgent priority queue 00:10:12.959 Starting thread on core 2 with urgent priority queue 00:10:12.959 Starting thread on core 3 with urgent priority queue 00:10:12.959 Starting thread on core 0 with urgent priority queue 00:10:12.959 SPDK bdev Controller (SPDK1 ) core 0: 2946.33 IO/s 33.94 secs/100000 ios 00:10:12.959 SPDK bdev Controller (SPDK1 ) core 1: 2957.33 IO/s 33.81 secs/100000 ios 00:10:12.959 SPDK bdev Controller (SPDK1 ) core 2: 2802.33 IO/s 35.68 secs/100000 ios 00:10:12.959 SPDK bdev Controller (SPDK1 ) core 3: 3154.67 IO/s 31.70 secs/100000 ios 00:10:12.959 ======================================================== 00:10:12.959 00:10:12.959 00:57:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:12.959 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.959 [2024-05-15 00:57:25.196489] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:12.959 Initializing NVMe Controllers 00:10:12.959 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:12.959 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:12.959 Namespace ID: 1 size: 0GB 00:10:12.959 Initialization complete. 00:10:12.959 INFO: using host memory buffer for IO 00:10:12.959 Hello world! 00:10:12.959 [2024-05-15 00:57:25.231108] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:12.959 00:57:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:12.959 EAL: No free 2048 kB hugepages reported on node 1 00:10:13.218 [2024-05-15 00:57:25.543373] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:14.600 Initializing NVMe Controllers 00:10:14.600 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:14.600 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:14.600 Initialization complete. Launching workers. 
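Note on the overhead histograms below: the submit and complete tables are cumulative latency distributions, where the left-hand pair of values is a microsecond bucket range, the percentage is cumulative over all buckets so far, and the parenthesised count is the number of I/Os in that single bucket. A minimal sketch of how such a table could be built from raw per-I/O latencies (an assumed data shape, not the overhead tool's actual implementation):

    from bisect import bisect_left

    def cumulative_histogram(latencies_us, bucket_edges_us):
        # count latencies per bucket, then emit (bucket edge, cumulative %, per-bucket count)
        counts = [0] * len(bucket_edges_us)
        for lat in latencies_us:
            i = min(bisect_left(bucket_edges_us, lat), len(bucket_edges_us) - 1)
            counts[i] += 1
        total, running, rows = len(latencies_us), 0, []
        for edge, count in zip(bucket_edges_us, counts):
            running += count
            rows.append((edge, 100.0 * running / total, count))
        return rows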
00:10:14.600 submit (in ns) avg, min, max = 6072.4, 3513.3, 4016132.2 00:10:14.600 complete (in ns) avg, min, max = 24607.6, 2074.4, 4014590.0 00:10:14.600 00:10:14.600 Submit histogram 00:10:14.600 ================ 00:10:14.600 Range in us Cumulative Count 00:10:14.600 3.508 - 3.532: 0.2951% ( 39) 00:10:14.600 3.532 - 3.556: 1.0216% ( 96) 00:10:14.600 3.556 - 3.579: 3.6701% ( 350) 00:10:14.600 3.579 - 3.603: 8.5963% ( 651) 00:10:14.600 3.603 - 3.627: 16.5040% ( 1045) 00:10:14.600 3.627 - 3.650: 26.1597% ( 1276) 00:10:14.600 3.650 - 3.674: 35.9894% ( 1299) 00:10:14.600 3.674 - 3.698: 43.8365% ( 1037) 00:10:14.600 3.698 - 3.721: 49.8600% ( 796) 00:10:14.600 3.721 - 3.745: 54.0749% ( 557) 00:10:14.600 3.745 - 3.769: 57.9266% ( 509) 00:10:14.600 3.769 - 3.793: 61.3924% ( 458) 00:10:14.600 3.793 - 3.816: 64.4268% ( 401) 00:10:14.600 3.816 - 3.840: 67.8774% ( 456) 00:10:14.600 3.840 - 3.864: 72.1226% ( 561) 00:10:14.600 3.864 - 3.887: 76.5342% ( 583) 00:10:14.600 3.887 - 3.911: 80.5675% ( 533) 00:10:14.600 3.911 - 3.935: 83.8971% ( 440) 00:10:14.600 3.935 - 3.959: 86.0462% ( 284) 00:10:14.600 3.959 - 3.982: 87.6958% ( 218) 00:10:14.600 3.982 - 4.006: 89.1487% ( 192) 00:10:14.600 4.006 - 4.030: 90.2384% ( 144) 00:10:14.600 4.030 - 4.053: 91.1616% ( 122) 00:10:14.600 4.053 - 4.077: 91.9939% ( 110) 00:10:14.600 4.077 - 4.101: 92.9398% ( 125) 00:10:14.600 4.101 - 4.124: 93.8252% ( 117) 00:10:14.600 4.124 - 4.148: 94.7181% ( 118) 00:10:14.600 4.148 - 4.172: 95.3311% ( 81) 00:10:14.600 4.172 - 4.196: 95.7094% ( 50) 00:10:14.600 4.196 - 4.219: 96.0197% ( 41) 00:10:14.600 4.219 - 4.243: 96.2316% ( 28) 00:10:14.600 4.243 - 4.267: 96.4737% ( 32) 00:10:14.600 4.267 - 4.290: 96.6175% ( 19) 00:10:14.600 4.290 - 4.314: 96.7688% ( 20) 00:10:14.600 4.314 - 4.338: 96.9353% ( 22) 00:10:14.600 4.338 - 4.361: 97.0488% ( 15) 00:10:14.600 4.361 - 4.385: 97.1396% ( 12) 00:10:14.600 4.385 - 4.409: 97.2229% ( 11) 00:10:14.600 4.409 - 4.433: 97.2683% ( 6) 00:10:14.600 4.433 - 4.456: 97.3061% ( 5) 00:10:14.600 4.456 - 4.480: 97.3364% ( 4) 00:10:14.600 4.480 - 4.504: 97.4120% ( 10) 00:10:14.600 4.504 - 4.527: 97.4272% ( 2) 00:10:14.600 4.527 - 4.551: 97.4423% ( 2) 00:10:14.600 4.575 - 4.599: 97.4499% ( 1) 00:10:14.600 4.646 - 4.670: 97.4574% ( 1) 00:10:14.600 4.693 - 4.717: 97.4650% ( 1) 00:10:14.600 4.717 - 4.741: 97.4877% ( 3) 00:10:14.600 4.741 - 4.764: 97.5028% ( 2) 00:10:14.600 4.764 - 4.788: 97.5255% ( 3) 00:10:14.600 4.788 - 4.812: 97.5634% ( 5) 00:10:14.600 4.812 - 4.836: 97.5861% ( 3) 00:10:14.600 4.836 - 4.859: 97.6315% ( 6) 00:10:14.600 4.859 - 4.883: 97.6693% ( 5) 00:10:14.600 4.883 - 4.907: 97.7072% ( 5) 00:10:14.600 4.907 - 4.930: 97.7374% ( 4) 00:10:14.600 4.930 - 4.954: 97.7980% ( 8) 00:10:14.600 4.954 - 4.978: 97.8055% ( 1) 00:10:14.600 4.978 - 5.001: 97.8358% ( 4) 00:10:14.600 5.001 - 5.025: 97.8736% ( 5) 00:10:14.600 5.025 - 5.049: 97.9039% ( 4) 00:10:14.600 5.049 - 5.073: 97.9417% ( 5) 00:10:14.600 5.073 - 5.096: 97.9796% ( 5) 00:10:14.600 5.096 - 5.120: 97.9947% ( 2) 00:10:14.600 5.120 - 5.144: 98.0325% ( 5) 00:10:14.600 5.144 - 5.167: 98.0628% ( 4) 00:10:14.600 5.167 - 5.191: 98.1158% ( 7) 00:10:14.600 5.191 - 5.215: 98.1460% ( 4) 00:10:14.600 5.215 - 5.239: 98.1687% ( 3) 00:10:14.600 5.239 - 5.262: 98.1914% ( 3) 00:10:14.600 5.262 - 5.286: 98.1990% ( 1) 00:10:14.600 5.286 - 5.310: 98.2142% ( 2) 00:10:14.600 5.310 - 5.333: 98.2369% ( 3) 00:10:14.600 5.333 - 5.357: 98.2596% ( 3) 00:10:14.600 5.404 - 5.428: 98.2671% ( 1) 00:10:14.600 5.452 - 5.476: 98.2823% ( 2) 00:10:14.600 5.476 - 5.499: 98.2974% ( 
2) 00:10:14.600 5.523 - 5.547: 98.3201% ( 3) 00:10:14.600 5.547 - 5.570: 98.3277% ( 1) 00:10:14.600 5.570 - 5.594: 98.3352% ( 1) 00:10:14.600 5.594 - 5.618: 98.3428% ( 1) 00:10:14.600 5.618 - 5.641: 98.3504% ( 1) 00:10:14.600 5.641 - 5.665: 98.3579% ( 1) 00:10:14.600 5.665 - 5.689: 98.3655% ( 1) 00:10:14.600 5.689 - 5.713: 98.3731% ( 1) 00:10:14.600 5.784 - 5.807: 98.3806% ( 1) 00:10:14.600 5.831 - 5.855: 98.3882% ( 1) 00:10:14.600 5.855 - 5.879: 98.3958% ( 1) 00:10:14.600 5.926 - 5.950: 98.4033% ( 1) 00:10:14.600 5.950 - 5.973: 98.4109% ( 1) 00:10:14.600 6.116 - 6.163: 98.4185% ( 1) 00:10:14.600 6.163 - 6.210: 98.4260% ( 1) 00:10:14.600 6.210 - 6.258: 98.4336% ( 1) 00:10:14.600 6.400 - 6.447: 98.4412% ( 1) 00:10:14.600 6.542 - 6.590: 98.4487% ( 1) 00:10:14.600 6.684 - 6.732: 98.4563% ( 1) 00:10:14.600 6.779 - 6.827: 98.4639% ( 1) 00:10:14.600 6.827 - 6.874: 98.4714% ( 1) 00:10:14.600 6.874 - 6.921: 98.4790% ( 1) 00:10:14.600 6.969 - 7.016: 98.4941% ( 2) 00:10:14.600 7.016 - 7.064: 98.5017% ( 1) 00:10:14.600 7.206 - 7.253: 98.5168% ( 2) 00:10:14.600 7.301 - 7.348: 98.5244% ( 1) 00:10:14.600 7.348 - 7.396: 98.5320% ( 1) 00:10:14.600 7.443 - 7.490: 98.5471% ( 2) 00:10:14.600 7.490 - 7.538: 98.5547% ( 1) 00:10:14.600 7.585 - 7.633: 98.5698% ( 2) 00:10:14.600 7.633 - 7.680: 98.5774% ( 1) 00:10:14.600 7.727 - 7.775: 98.5925% ( 2) 00:10:14.600 7.775 - 7.822: 98.6076% ( 2) 00:10:14.600 7.917 - 7.964: 98.6228% ( 2) 00:10:14.600 8.012 - 8.059: 98.6379% ( 2) 00:10:14.600 8.059 - 8.107: 98.6530% ( 2) 00:10:14.600 8.201 - 8.249: 98.6682% ( 2) 00:10:14.600 8.249 - 8.296: 98.6833% ( 2) 00:10:14.600 8.344 - 8.391: 98.6984% ( 2) 00:10:14.600 8.391 - 8.439: 98.7136% ( 2) 00:10:14.600 8.439 - 8.486: 98.7212% ( 1) 00:10:14.600 8.486 - 8.533: 98.7363% ( 2) 00:10:14.600 8.533 - 8.581: 98.7514% ( 2) 00:10:14.600 8.628 - 8.676: 98.7590% ( 1) 00:10:14.600 8.676 - 8.723: 98.7666% ( 1) 00:10:14.600 9.055 - 9.102: 98.7741% ( 1) 00:10:14.600 9.244 - 9.292: 98.7893% ( 2) 00:10:14.600 9.339 - 9.387: 98.7968% ( 1) 00:10:14.600 9.481 - 9.529: 98.8044% ( 1) 00:10:14.600 9.529 - 9.576: 98.8120% ( 1) 00:10:14.600 9.671 - 9.719: 98.8195% ( 1) 00:10:14.600 9.719 - 9.766: 98.8271% ( 1) 00:10:14.600 9.956 - 10.003: 98.8347% ( 1) 00:10:14.600 10.003 - 10.050: 98.8422% ( 1) 00:10:14.600 10.098 - 10.145: 98.8498% ( 1) 00:10:14.600 10.193 - 10.240: 98.8574% ( 1) 00:10:14.600 10.809 - 10.856: 98.8649% ( 1) 00:10:14.600 11.378 - 11.425: 98.8725% ( 1) 00:10:14.600 11.425 - 11.473: 98.8801% ( 1) 00:10:14.600 11.662 - 11.710: 98.8876% ( 1) 00:10:14.600 11.710 - 11.757: 98.8952% ( 1) 00:10:14.600 12.231 - 12.326: 98.9028% ( 1) 00:10:14.600 12.516 - 12.610: 98.9103% ( 1) 00:10:14.600 12.610 - 12.705: 98.9179% ( 1) 00:10:14.600 12.990 - 13.084: 98.9255% ( 1) 00:10:14.600 13.179 - 13.274: 98.9330% ( 1) 00:10:14.600 13.369 - 13.464: 98.9406% ( 1) 00:10:14.600 13.559 - 13.653: 98.9482% ( 1) 00:10:14.600 13.653 - 13.748: 98.9557% ( 1) 00:10:14.600 13.843 - 13.938: 98.9633% ( 1) 00:10:14.600 13.938 - 14.033: 98.9709% ( 1) 00:10:14.600 14.033 - 14.127: 98.9784% ( 1) 00:10:14.600 14.601 - 14.696: 98.9936% ( 2) 00:10:14.600 16.972 - 17.067: 99.0087% ( 2) 00:10:14.600 17.161 - 17.256: 99.0314% ( 3) 00:10:14.600 17.256 - 17.351: 99.0390% ( 1) 00:10:14.600 17.351 - 17.446: 99.0541% ( 2) 00:10:14.600 17.446 - 17.541: 99.1071% ( 7) 00:10:14.600 17.541 - 17.636: 99.1222% ( 2) 00:10:14.600 17.636 - 17.730: 99.1449% ( 3) 00:10:14.600 17.730 - 17.825: 99.2130% ( 9) 00:10:14.600 17.825 - 17.920: 99.2433% ( 4) 00:10:14.600 17.920 - 18.015: 99.3190% ( 10) 
00:10:14.600 18.015 - 18.110: 99.3795% ( 8) 00:10:14.600 18.110 - 18.204: 99.4703% ( 12) 00:10:14.600 18.204 - 18.299: 99.5308% ( 8) 00:10:14.600 18.299 - 18.394: 99.5989% ( 9) 00:10:14.600 18.394 - 18.489: 99.6292% ( 4) 00:10:14.600 18.489 - 18.584: 99.6746% ( 6) 00:10:14.600 18.584 - 18.679: 99.7124% ( 5) 00:10:14.600 18.679 - 18.773: 99.7579% ( 6) 00:10:14.600 18.773 - 18.868: 99.8033% ( 6) 00:10:14.600 18.868 - 18.963: 99.8184% ( 2) 00:10:14.600 18.963 - 19.058: 99.8487% ( 4) 00:10:14.600 19.058 - 19.153: 99.8562% ( 1) 00:10:14.600 19.153 - 19.247: 99.8638% ( 1) 00:10:14.600 19.342 - 19.437: 99.8789% ( 2) 00:10:14.600 19.437 - 19.532: 99.8941% ( 2) 00:10:14.600 20.101 - 20.196: 99.9016% ( 1) 00:10:14.600 20.290 - 20.385: 99.9092% ( 1) 00:10:14.600 21.902 - 21.997: 99.9168% ( 1) 00:10:14.600 22.471 - 22.566: 99.9243% ( 1) 00:10:14.600 23.324 - 23.419: 99.9319% ( 1) 00:10:14.600 25.410 - 25.600: 99.9395% ( 1) 00:10:14.600 28.824 - 29.013: 99.9470% ( 1) 00:10:14.600 3980.705 - 4004.978: 99.9849% ( 5) 00:10:14.600 4004.978 - 4029.250: 100.0000% ( 2) 00:10:14.600 00:10:14.600 Complete histogram 00:10:14.600 ================== 00:10:14.600 Range in us Cumulative Count 00:10:14.600 2.074 - 2.086: 10.2081% ( 1349) 00:10:14.600 2.086 - 2.098: 46.4170% ( 4785) 00:10:14.600 2.098 - 2.110: 50.6167% ( 555) 00:10:14.600 2.110 - 2.121: 53.5376% ( 386) 00:10:14.600 2.121 - 2.133: 57.8434% ( 569) 00:10:14.600 2.133 - 2.145: 58.7968% ( 126) 00:10:14.600 2.145 - 2.157: 67.0299% ( 1088) 00:10:14.600 2.157 - 2.169: 75.8305% ( 1163) 00:10:14.600 2.169 - 2.181: 76.8294% ( 132) 00:10:14.600 2.181 - 2.193: 78.4033% ( 208) 00:10:14.600 2.193 - 2.204: 80.3405% ( 256) 00:10:14.600 2.204 - 2.216: 80.7946% ( 60) 00:10:14.600 2.216 - 2.228: 84.1998% ( 450) 00:10:14.600 2.228 - 2.240: 89.5119% ( 702) 00:10:14.600 2.240 - 2.252: 91.4415% ( 255) 00:10:14.600 2.252 - 2.264: 92.2058% ( 101) 00:10:14.600 2.264 - 2.276: 92.9625% ( 100) 00:10:14.600 2.276 - 2.287: 93.2879% ( 43) 00:10:14.600 2.287 - 2.299: 93.7117% ( 56) 00:10:14.600 2.299 - 2.311: 94.4835% ( 102) 00:10:14.600 2.311 - 2.323: 95.1495% ( 88) 00:10:14.600 2.323 - 2.335: 95.2781% ( 17) 00:10:14.600 2.335 - 2.347: 95.3386% ( 8) 00:10:14.600 2.347 - 2.359: 95.4446% ( 14) 00:10:14.600 2.359 - 2.370: 95.5278% ( 11) 00:10:14.600 2.370 - 2.382: 95.7775% ( 33) 00:10:14.600 2.382 - 2.394: 96.1407% ( 48) 00:10:14.600 2.394 - 2.406: 96.5569% ( 55) 00:10:14.600 2.406 - 2.418: 96.8748% ( 42) 00:10:14.600 2.418 - 2.430: 97.0715% ( 26) 00:10:14.600 2.430 - 2.441: 97.3212% ( 33) 00:10:14.600 2.441 - 2.453: 97.4423% ( 16) 00:10:14.600 2.453 - 2.465: 97.6012% ( 21) 00:10:14.600 2.465 - 2.477: 97.7299% ( 17) 00:10:14.600 2.477 - 2.489: 97.8736% ( 19) 00:10:14.600 2.489 - 2.501: 97.9947% ( 16) 00:10:14.600 2.501 - 2.513: 98.1082% ( 15) 00:10:14.600 2.513 - 2.524: 98.1687% ( 8) 00:10:14.600 2.524 - 2.536: 98.2520% ( 11) 00:10:14.600 2.536 - 2.548: 98.3125% ( 8) 00:10:14.600 2.548 - 2.560: 98.3579% ( 6) 00:10:14.600 2.560 - 2.572: 98.3731% ( 2) 00:10:14.600 2.572 - 2.584: 98.3882% ( 2) 00:10:14.600 2.607 - 2.619: 98.3958% ( 1) 00:10:14.600 2.619 - 2.631: 98.4109% ( 2) 00:10:14.600 2.631 - 2.643: 98.4185% ( 1) 00:10:14.600 2.643 - 2.655: 98.4336% ( 2) 00:10:14.600 2.702 - 2.714: 98.4412% ( 1) 00:10:14.600 2.714 - 2.726: 98.4639% ( 3) 00:10:14.600 2.750 - 2.761: 98.4714% ( 1) 00:10:14.600 2.773 - 2.785: 98.4790% ( 1) 00:10:14.600 2.797 - 2.809: 98.4866% ( 1) 00:10:14.600 2.833 - 2.844: 98.4941% ( 1) 00:10:14.600 2.844 - 2.856: 98.5093% ( 2) 00:10:14.600 2.880 - 2.892: 98.5168% ( 1) 
00:10:14.600 2.904 - 2.916: 98.5244% ( 1) 00:10:14.600 2.999 - 3.010: 98.5320% ( 1) 00:10:14.600 3.034 - 3.058: 98.5395% ( 1) 00:10:14.600 3.081 - 3.105: 98.5547% ( 2) 00:10:14.600 3.129 - 3.153: 98.5622% ( 1) 00:10:14.600 3.153 - 3.176: 98.5774% ( 2) 00:10:14.600 3.176 - 3.200: 98.5925% ( 2) 00:10:14.600 3.200 - 3.224: 98.6152% ( 3) 00:10:14.600 3.224 - 3.247: 98.6228% ( 1) 00:10:14.600 3.247 - 3.271: 98.6303% ( 1) 00:10:14.600 3.295 - 3.319: 98.6455% ( 2) 00:10:14.600 3.319 - 3.342: 98.6606% ( 2) 00:10:14.600 3.366 - 3.390: 98.6757% ( 2) 00:10:14.600 [2024-05-15 00:57:26.565598] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:14.600 3.390 - 3.413: 98.6909% ( 2) 00:10:14.600 3.413 - 3.437: 98.7060% ( 2) 00:10:14.600 3.437 - 3.461: 98.7212% ( 2) 00:10:14.600 3.461 - 3.484: 98.7363% ( 2) 00:10:14.600 3.484 - 3.508: 98.7514% ( 2) 00:10:14.600 3.556 - 3.579: 98.7741% ( 3) 00:10:14.600 3.603 - 3.627: 98.7817% ( 1) 00:10:14.600 3.627 - 3.650: 98.7893% ( 1) 00:10:14.600 3.650 - 3.674: 98.8044% ( 2) 00:10:14.600 3.674 - 3.698: 98.8120% ( 1) 00:10:14.600 3.698 - 3.721: 98.8195% ( 1) 00:10:14.600 3.745 - 3.769: 98.8271% ( 1) 00:10:14.601 4.196 - 4.219: 98.8347% ( 1) 00:10:14.601 5.073 - 5.096: 98.8422% ( 1) 00:10:14.601 5.144 - 5.167: 98.8498% ( 1) 00:10:14.601 5.523 - 5.547: 98.8574% ( 1) 00:10:14.601 5.618 - 5.641: 98.8649% ( 1) 00:10:14.601 5.713 - 5.736: 98.8725% ( 1) 00:10:14.601 5.902 - 5.926: 98.8801% ( 1) 00:10:14.601 6.258 - 6.305: 98.8876% ( 1) 00:10:14.601 6.779 - 6.827: 98.8952% ( 1) 00:10:14.601 6.921 - 6.969: 98.9028% ( 1) 00:10:14.601 7.396 - 7.443: 98.9103% ( 1) 00:10:14.601 7.585 - 7.633: 98.9179% ( 1) 00:10:14.601 15.455 - 15.550: 98.9255% ( 1) 00:10:14.601 15.550 - 15.644: 98.9406% ( 2) 00:10:14.601 15.644 - 15.739: 98.9482% ( 1) 00:10:14.601 15.739 - 15.834: 98.9709% ( 3) 00:10:14.601 15.834 - 15.929: 98.9936% ( 3) 00:10:14.601 15.929 - 16.024: 99.0011% ( 1) 00:10:14.601 16.024 - 16.119: 99.0163% ( 2) 00:10:14.601 16.119 - 16.213: 99.0465% ( 4) 00:10:14.601 16.213 - 16.308: 99.0768% ( 4) 00:10:14.601 16.308 - 16.403: 99.1146% ( 5) 00:10:14.601 16.403 - 16.498: 99.1373% ( 3) 00:10:14.601 16.498 - 16.593: 99.1827% ( 6) 00:10:14.601 16.593 - 16.687: 99.2433% ( 8) 00:10:14.601 16.687 - 16.782: 99.2736% ( 4) 00:10:14.601 16.782 - 16.877: 99.2963% ( 3) 00:10:14.601 16.877 - 16.972: 99.3417% ( 6) 00:10:14.601 16.972 - 17.067: 99.3492% ( 1) 00:10:14.601 17.067 - 17.161: 99.3568% ( 1) 00:10:14.601 17.161 - 17.256: 99.3719% ( 2) 00:10:14.601 17.256 - 17.351: 99.3946% ( 3) 00:10:14.601 17.351 - 17.446: 99.4022% ( 1) 00:10:14.601 17.446 - 17.541: 99.4098% ( 1) 00:10:14.601 17.541 - 17.636: 99.4173% ( 1) 00:10:14.601 17.636 - 17.730: 99.4249% ( 1) 00:10:14.601 17.730 - 17.825: 99.4325% ( 1) 00:10:14.601 18.204 - 18.299: 99.4400% ( 1) 00:10:14.601 3835.070 - 3859.342: 99.4476% ( 1) 00:10:14.601 3980.705 - 4004.978: 99.9319% ( 64) 00:10:14.601 4004.978 - 4029.250: 100.0000% ( 9) 00:10:14.601 00:10:14.601 00:57:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:10:14.601 00:57:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:10:14.601 00:57:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:10:14.601 00:57:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:10:14.601 00:57:26 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:14.601 [ 00:10:14.601 { 00:10:14.601 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:14.601 "subtype": "Discovery", 00:10:14.601 "listen_addresses": [], 00:10:14.601 "allow_any_host": true, 00:10:14.601 "hosts": [] 00:10:14.601 }, 00:10:14.601 { 00:10:14.601 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:14.601 "subtype": "NVMe", 00:10:14.601 "listen_addresses": [ 00:10:14.601 { 00:10:14.601 "trtype": "VFIOUSER", 00:10:14.601 "adrfam": "IPv4", 00:10:14.601 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:14.601 "trsvcid": "0" 00:10:14.601 } 00:10:14.601 ], 00:10:14.601 "allow_any_host": true, 00:10:14.601 "hosts": [], 00:10:14.601 "serial_number": "SPDK1", 00:10:14.601 "model_number": "SPDK bdev Controller", 00:10:14.601 "max_namespaces": 32, 00:10:14.601 "min_cntlid": 1, 00:10:14.601 "max_cntlid": 65519, 00:10:14.601 "namespaces": [ 00:10:14.601 { 00:10:14.601 "nsid": 1, 00:10:14.601 "bdev_name": "Malloc1", 00:10:14.601 "name": "Malloc1", 00:10:14.601 "nguid": "3FC929C1BA81446ABE66C62CC1A7B978", 00:10:14.601 "uuid": "3fc929c1-ba81-446a-be66-c62cc1a7b978" 00:10:14.601 } 00:10:14.601 ] 00:10:14.601 }, 00:10:14.601 { 00:10:14.601 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:14.601 "subtype": "NVMe", 00:10:14.601 "listen_addresses": [ 00:10:14.601 { 00:10:14.601 "trtype": "VFIOUSER", 00:10:14.601 "adrfam": "IPv4", 00:10:14.601 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:14.601 "trsvcid": "0" 00:10:14.601 } 00:10:14.601 ], 00:10:14.601 "allow_any_host": true, 00:10:14.601 "hosts": [], 00:10:14.601 "serial_number": "SPDK2", 00:10:14.601 "model_number": "SPDK bdev Controller", 00:10:14.601 "max_namespaces": 32, 00:10:14.601 "min_cntlid": 1, 00:10:14.601 "max_cntlid": 65519, 00:10:14.601 "namespaces": [ 00:10:14.601 { 00:10:14.601 "nsid": 1, 00:10:14.601 "bdev_name": "Malloc2", 00:10:14.601 "name": "Malloc2", 00:10:14.601 "nguid": "8B75FDEE5681472BB069D22194375716", 00:10:14.601 "uuid": "8b75fdee-5681-472b-b069-d22194375716" 00:10:14.601 } 00:10:14.601 ] 00:10:14.601 } 00:10:14.601 ] 00:10:14.601 00:57:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:14.601 00:57:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1200898 00:10:14.601 00:57:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:10:14.601 00:57:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:14.601 00:57:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:10:14.601 00:57:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:14.601 00:57:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:10:14.601 00:57:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:10:14.601 00:57:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:14.601 00:57:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:10:14.601 EAL: No free 2048 kB hugepages reported on node 1 00:10:14.859 [2024-05-15 00:57:27.078505] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:14.859 Malloc3 00:10:14.859 00:57:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:10:15.117 [2024-05-15 00:57:27.398868] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:15.117 00:57:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:15.117 Asynchronous Event Request test 00:10:15.117 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:15.117 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:15.117 Registering asynchronous event callbacks... 00:10:15.117 Starting namespace attribute notice tests for all controllers... 00:10:15.117 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:15.117 aer_cb - Changed Namespace 00:10:15.117 Cleaning up... 00:10:15.415 [ 00:10:15.415 { 00:10:15.415 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:15.415 "subtype": "Discovery", 00:10:15.415 "listen_addresses": [], 00:10:15.415 "allow_any_host": true, 00:10:15.415 "hosts": [] 00:10:15.415 }, 00:10:15.415 { 00:10:15.415 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:15.415 "subtype": "NVMe", 00:10:15.415 "listen_addresses": [ 00:10:15.415 { 00:10:15.415 "trtype": "VFIOUSER", 00:10:15.415 "adrfam": "IPv4", 00:10:15.415 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:15.415 "trsvcid": "0" 00:10:15.415 } 00:10:15.415 ], 00:10:15.415 "allow_any_host": true, 00:10:15.415 "hosts": [], 00:10:15.415 "serial_number": "SPDK1", 00:10:15.415 "model_number": "SPDK bdev Controller", 00:10:15.415 "max_namespaces": 32, 00:10:15.415 "min_cntlid": 1, 00:10:15.415 "max_cntlid": 65519, 00:10:15.415 "namespaces": [ 00:10:15.415 { 00:10:15.415 "nsid": 1, 00:10:15.415 "bdev_name": "Malloc1", 00:10:15.415 "name": "Malloc1", 00:10:15.415 "nguid": "3FC929C1BA81446ABE66C62CC1A7B978", 00:10:15.415 "uuid": "3fc929c1-ba81-446a-be66-c62cc1a7b978" 00:10:15.415 }, 00:10:15.415 { 00:10:15.415 "nsid": 2, 00:10:15.415 "bdev_name": "Malloc3", 00:10:15.415 "name": "Malloc3", 00:10:15.415 "nguid": "E9E94AB53E3B48CC8057400DA700D55C", 00:10:15.415 "uuid": "e9e94ab5-3e3b-48cc-8057-400da700d55c" 00:10:15.415 } 00:10:15.415 ] 00:10:15.415 }, 00:10:15.415 { 00:10:15.415 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:15.415 "subtype": "NVMe", 00:10:15.415 "listen_addresses": [ 00:10:15.415 { 00:10:15.415 "trtype": "VFIOUSER", 00:10:15.415 "adrfam": "IPv4", 00:10:15.415 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:15.415 "trsvcid": "0" 00:10:15.415 } 00:10:15.415 ], 00:10:15.415 "allow_any_host": true, 00:10:15.415 "hosts": [], 00:10:15.415 "serial_number": "SPDK2", 00:10:15.415 "model_number": "SPDK bdev Controller", 00:10:15.415 
"max_namespaces": 32, 00:10:15.415 "min_cntlid": 1, 00:10:15.415 "max_cntlid": 65519, 00:10:15.415 "namespaces": [ 00:10:15.415 { 00:10:15.415 "nsid": 1, 00:10:15.415 "bdev_name": "Malloc2", 00:10:15.415 "name": "Malloc2", 00:10:15.415 "nguid": "8B75FDEE5681472BB069D22194375716", 00:10:15.415 "uuid": "8b75fdee-5681-472b-b069-d22194375716" 00:10:15.415 } 00:10:15.415 ] 00:10:15.415 } 00:10:15.415 ] 00:10:15.415 00:57:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1200898 00:10:15.415 00:57:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:15.415 00:57:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:15.415 00:57:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:10:15.415 00:57:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:15.415 [2024-05-15 00:57:27.674250] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:10:15.415 [2024-05-15 00:57:27.674296] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1200988 ] 00:10:15.415 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.415 [2024-05-15 00:57:27.708600] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:10:15.415 [2024-05-15 00:57:27.717329] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:15.415 [2024-05-15 00:57:27.717360] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f8e148f7000 00:10:15.415 [2024-05-15 00:57:27.718324] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:15.415 [2024-05-15 00:57:27.719331] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:15.415 [2024-05-15 00:57:27.720345] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:15.415 [2024-05-15 00:57:27.721350] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:15.415 [2024-05-15 00:57:27.722360] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:15.415 [2024-05-15 00:57:27.723363] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:15.415 [2024-05-15 00:57:27.724374] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:15.415 [2024-05-15 00:57:27.725377] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:15.415 [2024-05-15 00:57:27.726384] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:15.415 [2024-05-15 00:57:27.726409] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f8e148ec000 00:10:15.415 [2024-05-15 00:57:27.727525] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:15.415 [2024-05-15 00:57:27.742763] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:10:15.415 [2024-05-15 00:57:27.742810] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:10:15.415 [2024-05-15 00:57:27.747910] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:15.415 [2024-05-15 00:57:27.747984] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:15.415 [2024-05-15 00:57:27.748081] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:10:15.415 [2024-05-15 00:57:27.748107] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:10:15.415 [2024-05-15 00:57:27.748118] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:10:15.415 [2024-05-15 00:57:27.748909] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:10:15.415 [2024-05-15 00:57:27.748950] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:10:15.415 [2024-05-15 00:57:27.748965] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:10:15.415 [2024-05-15 00:57:27.749916] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:15.415 [2024-05-15 00:57:27.749954] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:10:15.415 [2024-05-15 00:57:27.749969] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:10:15.415 [2024-05-15 00:57:27.750923] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:10:15.415 [2024-05-15 00:57:27.750962] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:15.415 [2024-05-15 00:57:27.751933] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:10:15.415 [2024-05-15 00:57:27.751968] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:10:15.415 [2024-05-15 00:57:27.751977] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:10:15.415 [2024-05-15 00:57:27.751989] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:15.415 [2024-05-15 00:57:27.752098] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:10:15.415 [2024-05-15 00:57:27.752106] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:15.415 [2024-05-15 00:57:27.752115] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:10:15.415 [2024-05-15 00:57:27.752956] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:10:15.415 [2024-05-15 00:57:27.753960] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:10:15.415 [2024-05-15 00:57:27.754962] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:15.415 [2024-05-15 00:57:27.755960] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:15.415 [2024-05-15 00:57:27.756051] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:15.415 [2024-05-15 00:57:27.756976] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:10:15.415 [2024-05-15 00:57:27.756997] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:15.415 [2024-05-15 00:57:27.757007] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:10:15.415 [2024-05-15 00:57:27.757031] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:10:15.416 [2024-05-15 00:57:27.757052] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:10:15.416 [2024-05-15 00:57:27.757077] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:15.416 [2024-05-15 00:57:27.757087] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:15.416 [2024-05-15 00:57:27.757105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:15.416 [2024-05-15 00:57:27.764155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:15.416 [2024-05-15 00:57:27.764179] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:10:15.416 [2024-05-15 00:57:27.764192] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:10:15.416 [2024-05-15 00:57:27.764201] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:10:15.416 [2024-05-15 00:57:27.764223] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:15.416 [2024-05-15 00:57:27.764231] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:10:15.416 [2024-05-15 00:57:27.764239] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:10:15.416 [2024-05-15 00:57:27.764247] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:10:15.416 [2024-05-15 00:57:27.764266] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:10:15.416 [2024-05-15 00:57:27.764299] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:10:15.416 [2024-05-15 00:57:27.772942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:15.416 [2024-05-15 00:57:27.772966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:15.416 [2024-05-15 00:57:27.772979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:15.416 [2024-05-15 00:57:27.772991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:15.416 [2024-05-15 00:57:27.773003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:15.416 [2024-05-15 00:57:27.773012] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:10:15.416 [2024-05-15 00:57:27.773028] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:15.416 [2024-05-15 00:57:27.773043] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:15.416 [2024-05-15 00:57:27.780956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:15.416 [2024-05-15 00:57:27.780975] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:10:15.416 [2024-05-15 00:57:27.780984] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:15.416 [2024-05-15 00:57:27.780996] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:10:15.416 [2024-05-15 00:57:27.781010] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:10:15.416 [2024-05-15 00:57:27.781025] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:15.416 [2024-05-15 00:57:27.788941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:15.416 [2024-05-15 00:57:27.789005] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:10:15.416 [2024-05-15 00:57:27.789022] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:10:15.416 [2024-05-15 00:57:27.789040] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:15.416 [2024-05-15 00:57:27.789050] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:15.416 [2024-05-15 00:57:27.789060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:10:15.416 [2024-05-15 00:57:27.796952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:15.416 [2024-05-15 00:57:27.796982] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:10:15.416 [2024-05-15 00:57:27.796999] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:10:15.416 [2024-05-15 00:57:27.797013] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:10:15.416 [2024-05-15 00:57:27.797026] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:15.416 [2024-05-15 00:57:27.797034] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:15.416 [2024-05-15 00:57:27.797044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:15.676 [2024-05-15 00:57:27.804957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:15.676 [2024-05-15 00:57:27.804984] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:15.676 [2024-05-15 00:57:27.804999] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:10:15.676 [2024-05-15 00:57:27.805028] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:15.676 [2024-05-15 00:57:27.805037] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:15.676 [2024-05-15 00:57:27.805048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:15.676 [2024-05-15 00:57:27.812959] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:15.676 [2024-05-15 00:57:27.812990] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:15.676 [2024-05-15 00:57:27.813004] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:10:15.676 [2024-05-15 00:57:27.813018] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:10:15.676 [2024-05-15 00:57:27.813028] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:10:15.676 [2024-05-15 00:57:27.813036] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:10:15.676 [2024-05-15 00:57:27.813045] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:10:15.676 [2024-05-15 00:57:27.813052] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:10:15.676 [2024-05-15 00:57:27.813061] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:10:15.676 [2024-05-15 00:57:27.813092] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:15.676 [2024-05-15 00:57:27.820953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:15.676 [2024-05-15 00:57:27.820980] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:15.676 [2024-05-15 00:57:27.828943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:15.677 [2024-05-15 00:57:27.828969] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:15.677 [2024-05-15 00:57:27.836942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:15.677 [2024-05-15 00:57:27.836967] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:15.677 [2024-05-15 00:57:27.844939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:15.677 [2024-05-15 00:57:27.844975] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:15.677 [2024-05-15 00:57:27.844986] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:15.677 [2024-05-15 00:57:27.844992] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:10:15.677 [2024-05-15 00:57:27.844998] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:15.677 [2024-05-15 00:57:27.845008] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:15.677 [2024-05-15 00:57:27.845020] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:15.677 [2024-05-15 00:57:27.845028] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:15.677 [2024-05-15 00:57:27.845037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:15.677 [2024-05-15 00:57:27.845048] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:15.677 [2024-05-15 00:57:27.845056] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:15.677 [2024-05-15 00:57:27.845065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:15.677 [2024-05-15 00:57:27.845082] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:15.677 [2024-05-15 00:57:27.845091] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:15.677 [2024-05-15 00:57:27.845100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:15.677 [2024-05-15 00:57:27.852940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:15.677 [2024-05-15 00:57:27.852971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:15.677 [2024-05-15 00:57:27.852988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:15.677 [2024-05-15 00:57:27.853004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:15.677 ===================================================== 00:10:15.677 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:15.677 ===================================================== 00:10:15.677 Controller Capabilities/Features 00:10:15.677 ================================ 00:10:15.677 Vendor ID: 4e58 00:10:15.677 Subsystem Vendor ID: 4e58 00:10:15.677 Serial Number: SPDK2 00:10:15.677 Model Number: SPDK bdev Controller 00:10:15.677 Firmware Version: 24.05 00:10:15.677 Recommended Arb Burst: 6 00:10:15.677 IEEE OUI Identifier: 8d 6b 50 00:10:15.677 Multi-path I/O 00:10:15.677 May have multiple subsystem ports: Yes 00:10:15.677 May have multiple controllers: Yes 00:10:15.677 Associated with SR-IOV VF: No 00:10:15.677 Max Data Transfer Size: 131072 00:10:15.677 Max Number of Namespaces: 32 00:10:15.677 Max Number of I/O Queues: 127 00:10:15.677 NVMe Specification Version (VS): 1.3 00:10:15.677 NVMe Specification Version (Identify): 1.3 00:10:15.677 Maximum Queue Entries: 256 00:10:15.677 Contiguous Queues Required: Yes 00:10:15.677 Arbitration Mechanisms Supported 00:10:15.677 Weighted Round Robin: Not Supported 00:10:15.677 Vendor Specific: Not Supported 00:10:15.677 Reset Timeout: 15000 ms 00:10:15.677 Doorbell Stride: 4 bytes 
00:10:15.677 NVM Subsystem Reset: Not Supported 00:10:15.677 Command Sets Supported 00:10:15.677 NVM Command Set: Supported 00:10:15.677 Boot Partition: Not Supported 00:10:15.677 Memory Page Size Minimum: 4096 bytes 00:10:15.677 Memory Page Size Maximum: 4096 bytes 00:10:15.677 Persistent Memory Region: Not Supported 00:10:15.677 Optional Asynchronous Events Supported 00:10:15.677 Namespace Attribute Notices: Supported 00:10:15.677 Firmware Activation Notices: Not Supported 00:10:15.677 ANA Change Notices: Not Supported 00:10:15.677 PLE Aggregate Log Change Notices: Not Supported 00:10:15.677 LBA Status Info Alert Notices: Not Supported 00:10:15.677 EGE Aggregate Log Change Notices: Not Supported 00:10:15.677 Normal NVM Subsystem Shutdown event: Not Supported 00:10:15.677 Zone Descriptor Change Notices: Not Supported 00:10:15.677 Discovery Log Change Notices: Not Supported 00:10:15.677 Controller Attributes 00:10:15.677 128-bit Host Identifier: Supported 00:10:15.677 Non-Operational Permissive Mode: Not Supported 00:10:15.677 NVM Sets: Not Supported 00:10:15.677 Read Recovery Levels: Not Supported 00:10:15.677 Endurance Groups: Not Supported 00:10:15.677 Predictable Latency Mode: Not Supported 00:10:15.677 Traffic Based Keep ALive: Not Supported 00:10:15.677 Namespace Granularity: Not Supported 00:10:15.677 SQ Associations: Not Supported 00:10:15.677 UUID List: Not Supported 00:10:15.677 Multi-Domain Subsystem: Not Supported 00:10:15.677 Fixed Capacity Management: Not Supported 00:10:15.677 Variable Capacity Management: Not Supported 00:10:15.677 Delete Endurance Group: Not Supported 00:10:15.677 Delete NVM Set: Not Supported 00:10:15.677 Extended LBA Formats Supported: Not Supported 00:10:15.677 Flexible Data Placement Supported: Not Supported 00:10:15.677 00:10:15.677 Controller Memory Buffer Support 00:10:15.677 ================================ 00:10:15.677 Supported: No 00:10:15.677 00:10:15.677 Persistent Memory Region Support 00:10:15.677 ================================ 00:10:15.677 Supported: No 00:10:15.677 00:10:15.677 Admin Command Set Attributes 00:10:15.677 ============================ 00:10:15.677 Security Send/Receive: Not Supported 00:10:15.677 Format NVM: Not Supported 00:10:15.677 Firmware Activate/Download: Not Supported 00:10:15.677 Namespace Management: Not Supported 00:10:15.677 Device Self-Test: Not Supported 00:10:15.677 Directives: Not Supported 00:10:15.677 NVMe-MI: Not Supported 00:10:15.677 Virtualization Management: Not Supported 00:10:15.677 Doorbell Buffer Config: Not Supported 00:10:15.677 Get LBA Status Capability: Not Supported 00:10:15.677 Command & Feature Lockdown Capability: Not Supported 00:10:15.677 Abort Command Limit: 4 00:10:15.677 Async Event Request Limit: 4 00:10:15.677 Number of Firmware Slots: N/A 00:10:15.677 Firmware Slot 1 Read-Only: N/A 00:10:15.677 Firmware Activation Without Reset: N/A 00:10:15.677 Multiple Update Detection Support: N/A 00:10:15.677 Firmware Update Granularity: No Information Provided 00:10:15.677 Per-Namespace SMART Log: No 00:10:15.677 Asymmetric Namespace Access Log Page: Not Supported 00:10:15.677 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:10:15.677 Command Effects Log Page: Supported 00:10:15.677 Get Log Page Extended Data: Supported 00:10:15.677 Telemetry Log Pages: Not Supported 00:10:15.677 Persistent Event Log Pages: Not Supported 00:10:15.677 Supported Log Pages Log Page: May Support 00:10:15.677 Commands Supported & Effects Log Page: Not Supported 00:10:15.677 Feature Identifiers & Effects Log Page:May 
Support 00:10:15.677 NVMe-MI Commands & Effects Log Page: May Support 00:10:15.677 Data Area 4 for Telemetry Log: Not Supported 00:10:15.677 Error Log Page Entries Supported: 128 00:10:15.677 Keep Alive: Supported 00:10:15.677 Keep Alive Granularity: 10000 ms 00:10:15.677 00:10:15.677 NVM Command Set Attributes 00:10:15.677 ========================== 00:10:15.677 Submission Queue Entry Size 00:10:15.677 Max: 64 00:10:15.677 Min: 64 00:10:15.677 Completion Queue Entry Size 00:10:15.677 Max: 16 00:10:15.677 Min: 16 00:10:15.677 Number of Namespaces: 32 00:10:15.677 Compare Command: Supported 00:10:15.677 Write Uncorrectable Command: Not Supported 00:10:15.677 Dataset Management Command: Supported 00:10:15.677 Write Zeroes Command: Supported 00:10:15.677 Set Features Save Field: Not Supported 00:10:15.677 Reservations: Not Supported 00:10:15.677 Timestamp: Not Supported 00:10:15.677 Copy: Supported 00:10:15.677 Volatile Write Cache: Present 00:10:15.677 Atomic Write Unit (Normal): 1 00:10:15.677 Atomic Write Unit (PFail): 1 00:10:15.677 Atomic Compare & Write Unit: 1 00:10:15.677 Fused Compare & Write: Supported 00:10:15.677 Scatter-Gather List 00:10:15.677 SGL Command Set: Supported (Dword aligned) 00:10:15.677 SGL Keyed: Not Supported 00:10:15.677 SGL Bit Bucket Descriptor: Not Supported 00:10:15.677 SGL Metadata Pointer: Not Supported 00:10:15.677 Oversized SGL: Not Supported 00:10:15.677 SGL Metadata Address: Not Supported 00:10:15.677 SGL Offset: Not Supported 00:10:15.677 Transport SGL Data Block: Not Supported 00:10:15.677 Replay Protected Memory Block: Not Supported 00:10:15.677 00:10:15.677 Firmware Slot Information 00:10:15.677 ========================= 00:10:15.677 Active slot: 1 00:10:15.677 Slot 1 Firmware Revision: 24.05 00:10:15.677 00:10:15.677 00:10:15.677 Commands Supported and Effects 00:10:15.677 ============================== 00:10:15.677 Admin Commands 00:10:15.677 -------------- 00:10:15.677 Get Log Page (02h): Supported 00:10:15.677 Identify (06h): Supported 00:10:15.677 Abort (08h): Supported 00:10:15.677 Set Features (09h): Supported 00:10:15.677 Get Features (0Ah): Supported 00:10:15.678 Asynchronous Event Request (0Ch): Supported 00:10:15.678 Keep Alive (18h): Supported 00:10:15.678 I/O Commands 00:10:15.678 ------------ 00:10:15.678 Flush (00h): Supported LBA-Change 00:10:15.678 Write (01h): Supported LBA-Change 00:10:15.678 Read (02h): Supported 00:10:15.678 Compare (05h): Supported 00:10:15.678 Write Zeroes (08h): Supported LBA-Change 00:10:15.678 Dataset Management (09h): Supported LBA-Change 00:10:15.678 Copy (19h): Supported LBA-Change 00:10:15.678 Unknown (79h): Supported LBA-Change 00:10:15.678 Unknown (7Ah): Supported 00:10:15.678 00:10:15.678 Error Log 00:10:15.678 ========= 00:10:15.678 00:10:15.678 Arbitration 00:10:15.678 =========== 00:10:15.678 Arbitration Burst: 1 00:10:15.678 00:10:15.678 Power Management 00:10:15.678 ================ 00:10:15.678 Number of Power States: 1 00:10:15.678 Current Power State: Power State #0 00:10:15.678 Power State #0: 00:10:15.678 Max Power: 0.00 W 00:10:15.678 Non-Operational State: Operational 00:10:15.678 Entry Latency: Not Reported 00:10:15.678 Exit Latency: Not Reported 00:10:15.678 Relative Read Throughput: 0 00:10:15.678 Relative Read Latency: 0 00:10:15.678 Relative Write Throughput: 0 00:10:15.678 Relative Write Latency: 0 00:10:15.678 Idle Power: Not Reported 00:10:15.678 Active Power: Not Reported 00:10:15.678 Non-Operational Permissive Mode: Not Supported 00:10:15.678 00:10:15.678 Health Information 
00:10:15.678 ================== 00:10:15.678 Critical Warnings: 00:10:15.678 Available Spare Space: OK 00:10:15.678 Temperature: OK 00:10:15.678 Device Reliability: OK 00:10:15.678 Read Only: No 00:10:15.678 Volatile Memory Backup: OK 00:10:15.678 Current Temperature: 0 Kelvin (-273 Celsius) 00:10:15.678 [2024-05-15 00:57:27.853128] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:15.678 [2024-05-15 00:57:27.860939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:10:15.678 [2024-05-15 00:57:27.860986] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:10:15.678 [2024-05-15 00:57:27.861009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.678 [2024-05-15 00:57:27.861021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.678 [2024-05-15 00:57:27.861032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.678 [2024-05-15 00:57:27.861042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.678 [2024-05-15 00:57:27.861127] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:15.678 [2024-05-15 00:57:27.861150] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:10:15.678 [2024-05-15 00:57:27.862132] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:15.678 [2024-05-15 00:57:27.862206] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:10:15.678 [2024-05-15 00:57:27.862222] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:10:15.678 [2024-05-15 00:57:27.863135] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:10:15.678 [2024-05-15 00:57:27.863159] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:10:15.678 [2024-05-15 00:57:27.863214] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:10:15.678 [2024-05-15 00:57:27.864416] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:15.678 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:15.678 Available Spare: 0% 00:10:15.678 Available Spare Threshold: 0% 00:10:15.678 Life Percentage Used: 0% 00:10:15.678 Data Units Read: 0 00:10:15.678 Data Units Written: 0 00:10:15.678 Host Read Commands: 0 00:10:15.678 Host Write Commands: 0 00:10:15.678 Controller Busy Time: 0 minutes 00:10:15.678 Power Cycles: 0 00:10:15.678 Power On Hours: 0 hours 00:10:15.678 Unsafe Shutdowns: 0 00:10:15.678 Unrecoverable Media Errors: 0 00:10:15.678 Lifetime Error Log Entries: 0 00:10:15.678 Warning Temperature Time: 0 
minutes 00:10:15.678 Critical Temperature Time: 0 minutes 00:10:15.678 00:10:15.678 Number of Queues 00:10:15.678 ================ 00:10:15.678 Number of I/O Submission Queues: 127 00:10:15.678 Number of I/O Completion Queues: 127 00:10:15.678 00:10:15.678 Active Namespaces 00:10:15.678 ================= 00:10:15.678 Namespace ID:1 00:10:15.678 Error Recovery Timeout: Unlimited 00:10:15.678 Command Set Identifier: NVM (00h) 00:10:15.678 Deallocate: Supported 00:10:15.678 Deallocated/Unwritten Error: Not Supported 00:10:15.678 Deallocated Read Value: Unknown 00:10:15.678 Deallocate in Write Zeroes: Not Supported 00:10:15.678 Deallocated Guard Field: 0xFFFF 00:10:15.678 Flush: Supported 00:10:15.678 Reservation: Supported 00:10:15.678 Namespace Sharing Capabilities: Multiple Controllers 00:10:15.678 Size (in LBAs): 131072 (0GiB) 00:10:15.678 Capacity (in LBAs): 131072 (0GiB) 00:10:15.678 Utilization (in LBAs): 131072 (0GiB) 00:10:15.678 NGUID: 8B75FDEE5681472BB069D22194375716 00:10:15.678 UUID: 8b75fdee-5681-472b-b069-d22194375716 00:10:15.678 Thin Provisioning: Not Supported 00:10:15.678 Per-NS Atomic Units: Yes 00:10:15.678 Atomic Boundary Size (Normal): 0 00:10:15.678 Atomic Boundary Size (PFail): 0 00:10:15.678 Atomic Boundary Offset: 0 00:10:15.678 Maximum Single Source Range Length: 65535 00:10:15.678 Maximum Copy Length: 65535 00:10:15.678 Maximum Source Range Count: 1 00:10:15.678 NGUID/EUI64 Never Reused: No 00:10:15.678 Namespace Write Protected: No 00:10:15.678 Number of LBA Formats: 1 00:10:15.678 Current LBA Format: LBA Format #00 00:10:15.678 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:15.678 00:10:15.678 00:57:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:15.678 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.937 [2024-05-15 00:57:28.092022] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:21.218 Initializing NVMe Controllers 00:10:21.218 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:21.218 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:21.218 Initialization complete. Launching workers. 
00:10:21.218 ======================================================== 00:10:21.218 Latency(us) 00:10:21.218 Device Information : IOPS MiB/s Average min max 00:10:21.218 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34357.40 134.21 3724.83 1172.47 7774.81 00:10:21.218 ======================================================== 00:10:21.218 Total : 34357.40 134.21 3724.83 1172.47 7774.81 00:10:21.218 00:10:21.218 [2024-05-15 00:57:33.199313] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:21.218 00:57:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:21.218 EAL: No free 2048 kB hugepages reported on node 1 00:10:21.218 [2024-05-15 00:57:33.434971] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:26.505 Initializing NVMe Controllers 00:10:26.505 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:26.505 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:26.505 Initialization complete. Launching workers. 00:10:26.505 ======================================================== 00:10:26.505 Latency(us) 00:10:26.505 Device Information : IOPS MiB/s Average min max 00:10:26.505 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32528.80 127.07 3936.63 1215.18 9832.63 00:10:26.505 ======================================================== 00:10:26.505 Total : 32528.80 127.07 3936.63 1215.18 9832.63 00:10:26.505 00:10:26.505 [2024-05-15 00:57:38.456963] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:26.505 00:57:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:26.505 EAL: No free 2048 kB hugepages reported on node 1 00:10:26.505 [2024-05-15 00:57:38.678656] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:31.774 [2024-05-15 00:57:43.811088] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:31.774 Initializing NVMe Controllers 00:10:31.774 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:31.774 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:31.774 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:10:31.774 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:10:31.774 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:10:31.774 Initialization complete. Launching workers. 
00:10:31.774 Starting thread on core 2 00:10:31.774 Starting thread on core 3 00:10:31.774 Starting thread on core 1 00:10:31.774 00:57:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:10:31.774 EAL: No free 2048 kB hugepages reported on node 1 00:10:31.774 [2024-05-15 00:57:44.144499] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:35.064 [2024-05-15 00:57:47.207359] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:35.064 Initializing NVMe Controllers 00:10:35.064 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:35.064 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:35.064 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:10:35.064 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:10:35.064 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:10:35.064 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:10:35.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:35.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:35.064 Initialization complete. Launching workers. 00:10:35.064 Starting thread on core 1 with urgent priority queue 00:10:35.064 Starting thread on core 2 with urgent priority queue 00:10:35.064 Starting thread on core 3 with urgent priority queue 00:10:35.064 Starting thread on core 0 with urgent priority queue 00:10:35.064 SPDK bdev Controller (SPDK2 ) core 0: 5346.33 IO/s 18.70 secs/100000 ios 00:10:35.064 SPDK bdev Controller (SPDK2 ) core 1: 5225.00 IO/s 19.14 secs/100000 ios 00:10:35.064 SPDK bdev Controller (SPDK2 ) core 2: 5774.00 IO/s 17.32 secs/100000 ios 00:10:35.064 SPDK bdev Controller (SPDK2 ) core 3: 5938.67 IO/s 16.84 secs/100000 ios 00:10:35.064 ======================================================== 00:10:35.064 00:10:35.064 00:57:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:35.064 EAL: No free 2048 kB hugepages reported on node 1 00:10:35.322 [2024-05-15 00:57:47.520416] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:35.322 Initializing NVMe Controllers 00:10:35.322 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:35.322 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:35.322 Namespace ID: 1 size: 0GB 00:10:35.322 Initialization complete. 00:10:35.322 INFO: using host memory buffer for IO 00:10:35.322 Hello world! 
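For context, the read and write latency/IOPS tables above come from spdk_nvme_perf pointed at the vfio-user endpoint of cnode2; a minimal sketch of the read pass as invoked in this run (the write pass differs only in '-w write') is:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
      -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

Here -q 128, -o 4096 and -t 5 correspond to the queue depth, 4096-byte I/O size and 5-second duration reflected in the result tables; the remaining flags are taken verbatim from the trace above.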
00:10:35.322 [2024-05-15 00:57:47.529506] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:35.322 00:57:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:35.322 EAL: No free 2048 kB hugepages reported on node 1 00:10:35.582 [2024-05-15 00:57:47.830182] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:36.961 Initializing NVMe Controllers 00:10:36.961 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:36.961 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:36.961 Initialization complete. Launching workers. 00:10:36.961 submit (in ns) avg, min, max = 8671.7, 3504.4, 5997002.2 00:10:36.962 complete (in ns) avg, min, max = 25293.2, 2056.7, 4016771.1 00:10:36.962 00:10:36.962 Submit histogram 00:10:36.962 ================ 00:10:36.962 Range in us Cumulative Count 00:10:36.962 3.484 - 3.508: 0.0076% ( 1) 00:10:36.962 3.508 - 3.532: 0.2807% ( 36) 00:10:36.962 3.532 - 3.556: 0.7891% ( 67) 00:10:36.962 3.556 - 3.579: 2.6783% ( 249) 00:10:36.962 3.579 - 3.603: 6.6540% ( 524) 00:10:36.962 3.603 - 3.627: 13.0349% ( 841) 00:10:36.962 3.627 - 3.650: 21.3733% ( 1099) 00:10:36.962 3.650 - 3.674: 31.4264% ( 1325) 00:10:36.962 3.674 - 3.698: 40.1973% ( 1156) 00:10:36.962 3.698 - 3.721: 48.2473% ( 1061) 00:10:36.962 3.721 - 3.745: 53.4598% ( 687) 00:10:36.962 3.745 - 3.769: 57.4583% ( 527) 00:10:36.962 3.769 - 3.793: 61.1836% ( 491) 00:10:36.962 3.793 - 3.816: 64.4537% ( 431) 00:10:36.962 3.816 - 3.840: 67.5797% ( 412) 00:10:36.962 3.840 - 3.864: 71.2367% ( 482) 00:10:36.962 3.864 - 3.887: 75.2049% ( 523) 00:10:36.962 3.887 - 3.911: 79.0364% ( 505) 00:10:36.962 3.911 - 3.935: 82.6631% ( 478) 00:10:36.962 3.935 - 3.959: 85.4097% ( 362) 00:10:36.962 3.959 - 3.982: 87.4052% ( 263) 00:10:36.962 3.982 - 4.006: 89.0971% ( 223) 00:10:36.962 4.006 - 4.030: 90.1821% ( 143) 00:10:36.962 4.030 - 4.053: 91.2974% ( 147) 00:10:36.962 4.053 - 4.077: 92.1320% ( 110) 00:10:36.962 4.077 - 4.101: 93.0046% ( 115) 00:10:36.962 4.101 - 4.124: 93.9074% ( 119) 00:10:36.962 4.124 - 4.148: 94.6434% ( 97) 00:10:36.962 4.148 - 4.172: 95.1745% ( 70) 00:10:36.962 4.172 - 4.196: 95.4780% ( 40) 00:10:36.962 4.196 - 4.219: 95.8346% ( 47) 00:10:36.962 4.219 - 4.243: 96.1381% ( 40) 00:10:36.962 4.243 - 4.267: 96.3354% ( 26) 00:10:36.962 4.267 - 4.290: 96.5706% ( 31) 00:10:36.962 4.290 - 4.314: 96.7071% ( 18) 00:10:36.962 4.314 - 4.338: 96.8058% ( 13) 00:10:36.962 4.338 - 4.361: 96.9196% ( 15) 00:10:36.962 4.361 - 4.385: 97.0334% ( 15) 00:10:36.962 4.385 - 4.409: 97.0713% ( 5) 00:10:36.962 4.409 - 4.433: 97.1320% ( 8) 00:10:36.962 4.433 - 4.456: 97.1775% ( 6) 00:10:36.962 4.456 - 4.480: 97.2307% ( 7) 00:10:36.962 4.480 - 4.504: 97.2914% ( 8) 00:10:36.962 4.504 - 4.527: 97.3065% ( 2) 00:10:36.962 4.527 - 4.551: 97.3293% ( 3) 00:10:36.962 4.551 - 4.575: 97.3369% ( 1) 00:10:36.962 4.575 - 4.599: 97.3445% ( 1) 00:10:36.962 4.599 - 4.622: 97.3520% ( 1) 00:10:36.962 4.622 - 4.646: 97.3596% ( 1) 00:10:36.962 4.670 - 4.693: 97.3672% ( 1) 00:10:36.962 4.693 - 4.717: 97.3976% ( 4) 00:10:36.962 4.741 - 4.764: 97.4052% ( 1) 00:10:36.962 4.788 - 4.812: 97.4127% ( 1) 00:10:36.962 4.812 - 4.836: 97.4203% ( 1) 00:10:36.962 4.836 - 4.859: 97.4279% ( 1) 00:10:36.962 4.859 - 4.883: 97.4659% ( 5) 00:10:36.962 
4.883 - 4.907: 97.4886% ( 3) 00:10:36.962 4.907 - 4.930: 97.5569% ( 9) 00:10:36.962 4.930 - 4.954: 97.5721% ( 2) 00:10:36.962 4.954 - 4.978: 97.6100% ( 5) 00:10:36.962 4.978 - 5.001: 97.6404% ( 4) 00:10:36.962 5.001 - 5.025: 97.6859% ( 6) 00:10:36.962 5.025 - 5.049: 97.7011% ( 2) 00:10:36.962 5.049 - 5.073: 97.7542% ( 7) 00:10:36.962 5.073 - 5.096: 97.8300% ( 10) 00:10:36.962 5.096 - 5.120: 97.8604% ( 4) 00:10:36.962 5.120 - 5.144: 97.8680% ( 1) 00:10:36.962 5.144 - 5.167: 97.9287% ( 8) 00:10:36.962 5.167 - 5.191: 97.9363% ( 1) 00:10:36.962 5.191 - 5.215: 97.9818% ( 6) 00:10:36.962 5.215 - 5.239: 98.0046% ( 3) 00:10:36.962 5.239 - 5.262: 98.0197% ( 2) 00:10:36.962 5.262 - 5.286: 98.0653% ( 6) 00:10:36.962 5.286 - 5.310: 98.1032% ( 5) 00:10:36.962 5.310 - 5.333: 98.1259% ( 3) 00:10:36.962 5.333 - 5.357: 98.1487% ( 3) 00:10:36.962 5.357 - 5.381: 98.1791% ( 4) 00:10:36.962 5.381 - 5.404: 98.1942% ( 2) 00:10:36.962 5.404 - 5.428: 98.2018% ( 1) 00:10:36.962 5.428 - 5.452: 98.2246% ( 3) 00:10:36.962 5.452 - 5.476: 98.2322% ( 1) 00:10:36.962 5.499 - 5.523: 98.2398% ( 1) 00:10:36.962 5.570 - 5.594: 98.2473% ( 1) 00:10:36.962 5.594 - 5.618: 98.2625% ( 2) 00:10:36.962 5.618 - 5.641: 98.2701% ( 1) 00:10:36.962 5.641 - 5.665: 98.2929% ( 3) 00:10:36.962 5.665 - 5.689: 98.3005% ( 1) 00:10:36.962 5.713 - 5.736: 98.3080% ( 1) 00:10:36.962 5.736 - 5.760: 98.3156% ( 1) 00:10:36.962 5.784 - 5.807: 98.3232% ( 1) 00:10:36.962 5.831 - 5.855: 98.3308% ( 1) 00:10:36.962 6.021 - 6.044: 98.3384% ( 1) 00:10:36.962 6.068 - 6.116: 98.3460% ( 1) 00:10:36.962 6.210 - 6.258: 98.3536% ( 1) 00:10:36.962 6.258 - 6.305: 98.3612% ( 1) 00:10:36.962 6.447 - 6.495: 98.3687% ( 1) 00:10:36.962 6.495 - 6.542: 98.3763% ( 1) 00:10:36.962 6.542 - 6.590: 98.3839% ( 1) 00:10:36.962 6.779 - 6.827: 98.3915% ( 1) 00:10:36.962 6.874 - 6.921: 98.3991% ( 1) 00:10:36.962 6.921 - 6.969: 98.4067% ( 1) 00:10:36.962 6.969 - 7.016: 98.4143% ( 1) 00:10:36.962 7.064 - 7.111: 98.4219% ( 1) 00:10:36.962 7.111 - 7.159: 98.4294% ( 1) 00:10:36.962 7.206 - 7.253: 98.4370% ( 1) 00:10:36.962 7.253 - 7.301: 98.4446% ( 1) 00:10:36.962 7.348 - 7.396: 98.4598% ( 2) 00:10:36.962 7.443 - 7.490: 98.4674% ( 1) 00:10:36.962 7.538 - 7.585: 98.4750% ( 1) 00:10:36.962 7.585 - 7.633: 98.4825% ( 1) 00:10:36.962 7.680 - 7.727: 98.4901% ( 1) 00:10:36.962 7.917 - 7.964: 98.5205% ( 4) 00:10:36.962 8.107 - 8.154: 98.5281% ( 1) 00:10:36.962 8.154 - 8.201: 98.5357% ( 1) 00:10:36.962 8.296 - 8.344: 98.5584% ( 3) 00:10:36.962 8.486 - 8.533: 98.5660% ( 1) 00:10:36.962 8.581 - 8.628: 98.5736% ( 1) 00:10:36.962 8.676 - 8.723: 98.5812% ( 1) 00:10:36.962 8.723 - 8.770: 98.5888% ( 1) 00:10:36.962 9.007 - 9.055: 98.5964% ( 1) 00:10:36.962 9.055 - 9.102: 98.6039% ( 1) 00:10:36.962 9.150 - 9.197: 98.6115% ( 1) 00:10:36.962 9.387 - 9.434: 98.6267% ( 2) 00:10:36.962 9.434 - 9.481: 98.6343% ( 1) 00:10:36.962 9.671 - 9.719: 98.6419% ( 1) 00:10:36.962 9.719 - 9.766: 98.6571% ( 2) 00:10:36.962 9.766 - 9.813: 98.6646% ( 1) 00:10:36.962 10.003 - 10.050: 98.6722% ( 1) 00:10:36.962 10.050 - 10.098: 98.6798% ( 1) 00:10:36.962 10.098 - 10.145: 98.6874% ( 1) 00:10:36.962 10.193 - 10.240: 98.6950% ( 1) 00:10:36.962 11.141 - 11.188: 98.7026% ( 1) 00:10:36.962 11.330 - 11.378: 98.7102% ( 1) 00:10:36.962 11.425 - 11.473: 98.7178% ( 1) 00:10:36.962 11.520 - 11.567: 98.7253% ( 1) 00:10:36.962 11.567 - 11.615: 98.7329% ( 1) 00:10:36.962 11.757 - 11.804: 98.7481% ( 2) 00:10:36.962 11.947 - 11.994: 98.7557% ( 1) 00:10:36.962 12.041 - 12.089: 98.7633% ( 1) 00:10:36.962 12.326 - 12.421: 98.7785% ( 2) 00:10:36.962 
12.516 - 12.610: 98.7860% ( 1) 00:10:36.962 12.610 - 12.705: 98.7936% ( 1) 00:10:36.962 13.084 - 13.179: 98.8164% ( 3) 00:10:36.962 13.179 - 13.274: 98.8316% ( 2) 00:10:36.962 13.464 - 13.559: 98.8392% ( 1) 00:10:36.962 13.748 - 13.843: 98.8467% ( 1) 00:10:36.962 14.033 - 14.127: 98.8543% ( 1) 00:10:36.962 14.507 - 14.601: 98.8619% ( 1) 00:10:36.962 14.601 - 14.696: 98.8695% ( 1) 00:10:36.962 15.076 - 15.170: 98.8771% ( 1) 00:10:36.962 15.360 - 15.455: 98.8847% ( 1) 00:10:36.962 17.067 - 17.161: 98.8923% ( 1) 00:10:36.962 17.161 - 17.256: 98.9074% ( 2) 00:10:36.962 17.256 - 17.351: 98.9454% ( 5) 00:10:36.962 17.351 - 17.446: 98.9681% ( 3) 00:10:36.962 17.446 - 17.541: 98.9985% ( 4) 00:10:36.962 17.541 - 17.636: 99.0364% ( 5) 00:10:36.962 17.636 - 17.730: 99.0592% ( 3) 00:10:36.962 17.730 - 17.825: 99.1199% ( 8) 00:10:36.962 17.825 - 17.920: 99.1730% ( 7) 00:10:36.962 17.920 - 18.015: 99.2489% ( 10) 00:10:36.962 18.015 - 18.110: 99.2716% ( 3) 00:10:36.962 18.110 - 18.204: 99.3551% ( 11) 00:10:36.962 18.204 - 18.299: 99.4158% ( 8) 00:10:36.962 18.299 - 18.394: 99.4841% ( 9) 00:10:36.962 18.394 - 18.489: 99.5372% ( 7) 00:10:36.962 18.489 - 18.584: 99.5979% ( 8) 00:10:36.962 18.584 - 18.679: 99.6358% ( 5) 00:10:36.962 18.679 - 18.773: 99.6889% ( 7) 00:10:36.962 18.773 - 18.868: 99.7117% ( 3) 00:10:36.962 18.868 - 18.963: 99.7420% ( 4) 00:10:36.962 18.963 - 19.058: 99.7496% ( 1) 00:10:36.962 19.058 - 19.153: 99.7648% ( 2) 00:10:36.962 19.247 - 19.342: 99.7724% ( 1) 00:10:36.962 19.342 - 19.437: 99.7800% ( 1) 00:10:36.962 19.437 - 19.532: 99.7876% ( 1) 00:10:36.962 19.627 - 19.721: 99.8027% ( 2) 00:10:36.962 19.721 - 19.816: 99.8103% ( 1) 00:10:36.962 19.911 - 20.006: 99.8179% ( 1) 00:10:36.962 20.196 - 20.290: 99.8331% ( 2) 00:10:36.962 20.764 - 20.859: 99.8407% ( 1) 00:10:36.962 21.523 - 21.618: 99.8483% ( 1) 00:10:36.962 21.807 - 21.902: 99.8558% ( 1) 00:10:36.962 22.092 - 22.187: 99.8634% ( 1) 00:10:36.962 23.135 - 23.230: 99.8710% ( 1) 00:10:36.962 23.514 - 23.609: 99.8786% ( 1) 00:10:36.962 23.893 - 23.988: 99.8862% ( 1) 00:10:36.962 3980.705 - 4004.978: 99.9772% ( 12) 00:10:36.962 4004.978 - 4029.250: 99.9924% ( 2) 00:10:36.962 5995.330 - 6019.603: 100.0000% ( 1) 00:10:36.962 00:10:36.963 Complete histogram 00:10:36.963 ================== 00:10:36.963 Range in us Cumulative Count 00:10:36.963 2.050 - 2.062: 0.5311% ( 70) 00:10:36.963 2.062 - 2.074: 29.5068% ( 3819) 00:10:36.963 2.074 - 2.086: 49.9165% ( 2690) 00:10:36.963 2.086 - 2.098: 51.1077% ( 157) 00:10:36.963 2.098 - 2.110: 55.8725% ( 628) 00:10:36.963 2.110 - 2.121: 58.9226% ( 402) 00:10:36.963 2.121 - 2.133: 61.8892% ( 391) 00:10:36.963 2.133 - 2.145: 73.1715% ( 1487) 00:10:36.963 2.145 - 2.157: 77.2003% ( 531) 00:10:36.963 2.157 - 2.169: 78.0046% ( 106) 00:10:36.963 2.169 - 2.181: 79.9090% ( 251) 00:10:36.963 2.181 - 2.193: 81.0319% ( 148) 00:10:36.963 2.193 - 2.204: 82.3217% ( 170) 00:10:36.963 2.204 - 2.216: 86.9196% ( 606) 00:10:36.963 2.216 - 2.228: 89.7648% ( 375) 00:10:36.963 2.228 - 2.240: 91.2140% ( 191) 00:10:36.963 2.240 - 2.252: 92.2838% ( 141) 00:10:36.963 2.252 - 2.264: 92.8832% ( 79) 00:10:36.963 2.264 - 2.276: 93.1487% ( 35) 00:10:36.963 2.276 - 2.287: 93.6495% ( 66) 00:10:36.963 2.287 - 2.299: 94.2033% ( 73) 00:10:36.963 2.299 - 2.311: 94.8027% ( 79) 00:10:36.963 2.311 - 2.323: 95.0531% ( 33) 00:10:36.963 2.323 - 2.335: 95.1517% ( 13) 00:10:36.963 2.335 - 2.347: 95.2124% ( 8) 00:10:36.963 2.347 - 2.359: 95.3035% ( 12) 00:10:36.963 2.359 - 2.370: 95.4173% ( 15) 00:10:36.963 2.370 - 2.382: 95.6373% ( 29) 
00:10:36.963 2.382 - 2.394: 95.8118% ( 23) 00:10:36.963 2.394 - 2.406: 95.9408% ( 17) 00:10:36.963 2.406 - 2.418: 96.0774% ( 18) 00:10:36.963 2.418 - 2.430: 96.2443% ( 22) 00:10:36.963 2.430 - 2.441: 96.4036% ( 21) 00:10:36.963 2.441 - 2.453: 96.5402% ( 18) 00:10:36.963 2.453 - 2.465: 96.7071% ( 22) 00:10:36.963 2.465 - 2.477: 96.8968% ( 25) 00:10:36.963 2.477 - 2.489: 97.0561% ( 21) 00:10:36.963 2.489 - 2.501: 97.2762% ( 29) 00:10:36.963 2.501 - 2.513: 97.4127% ( 18) 00:10:36.963 2.513 - 2.524: 97.5721% ( 21) 00:10:36.963 2.524 - 2.536: 97.7011% ( 17) 00:10:36.963 2.536 - 2.548: 97.8528% ( 20) 00:10:36.963 2.548 - 2.560: 97.9439% ( 12) 00:10:36.963 2.560 - 2.572: 98.0273% ( 11) 00:10:36.963 2.572 - 2.584: 98.0956% ( 9) 00:10:36.963 2.584 - 2.596: 98.1259% ( 4) 00:10:36.963 2.596 - 2.607: 98.1639% ( 5) 00:10:36.963 2.607 - 2.619: 98.1866% ( 3) 00:10:36.963 2.619 - 2.631: 98.2018% ( 2) 00:10:36.963 2.631 - 2.643: 98.2322% ( 4) 00:10:36.963 2.655 - 2.667: 98.2398% ( 1) 00:10:36.963 2.667 - 2.679: 98.2473% ( 1) 00:10:36.963 2.690 - 2.702: 98.2549% ( 1) 00:10:36.963 2.714 - 2.726: 98.2701% ( 2) 00:10:36.963 2.738 - 2.750: 98.2777% ( 1) 00:10:36.963 2.797 - 2.809: 98.3005% ( 3) 00:10:36.963 2.821 - 2.833: 98.3080% ( 1) 00:10:36.963 2.833 - 2.844: 98.3156% ( 1) 00:10:36.963 2.844 - 2.856: 98.3308% ( 2) 00:10:36.963 2.856 - 2.868: 98.3536% ( 3) 00:10:36.963 2.868 - 2.880: 98.3612% ( 1) 00:10:36.963 3.081 - 3.105: 98.3763% ( 2) 00:10:36.963 [2024-05-15 00:57:48.926693] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:36.963 3.271 - 3.295: 98.3839% ( 1) 00:10:36.963 3.295 - 3.319: 98.3915% ( 1) 00:10:36.963 3.319 - 3.342: 98.3991% ( 1) 00:10:36.963 3.366 - 3.390: 98.4067% ( 1) 00:10:36.963 3.413 - 3.437: 98.4143% ( 1) 00:10:36.963 3.437 - 3.461: 98.4219% ( 1) 00:10:36.963 3.461 - 3.484: 98.4370% ( 2) 00:10:36.963 3.484 - 3.508: 98.4446% ( 1) 00:10:36.963 3.532 - 3.556: 98.4750% ( 4) 00:10:36.963 3.556 - 3.579: 98.4825% ( 1) 00:10:36.963 3.579 - 3.603: 98.4901% ( 1) 00:10:36.963 3.627 - 3.650: 98.4977% ( 1) 00:10:36.963 3.698 - 3.721: 98.5205% ( 3) 00:10:36.963 3.745 - 3.769: 98.5432% ( 3) 00:10:36.963 3.769 - 3.793: 98.5508% ( 1) 00:10:36.963 3.816 - 3.840: 98.5584% ( 1) 00:10:36.963 3.840 - 3.864: 98.5660% ( 1) 00:10:36.963 3.864 - 3.887: 98.5736% ( 1) 00:10:36.963 3.887 - 3.911: 98.5812% ( 1) 00:10:36.963 4.006 - 4.030: 98.5888% ( 1) 00:10:36.963 4.030 - 4.053: 98.5964% ( 1) 00:10:36.963 4.290 - 4.314: 98.6039% ( 1) 00:10:36.963 4.812 - 4.836: 98.6115% ( 1) 00:10:36.963 5.025 - 5.049: 98.6267% ( 2) 00:10:36.963 5.167 - 5.191: 98.6343% ( 1) 00:10:36.963 5.191 - 5.215: 98.6419% ( 1) 00:10:36.963 5.215 - 5.239: 98.6495% ( 1) 00:10:36.963 5.310 - 5.333: 98.6571% ( 1) 00:10:36.963 5.333 - 5.357: 98.6646% ( 1) 00:10:36.963 5.594 - 5.618: 98.6722% ( 1) 00:10:36.963 5.665 - 5.689: 98.6798% ( 1) 00:10:36.963 5.689 - 5.713: 98.7026% ( 3) 00:10:36.963 5.831 - 5.855: 98.7253% ( 3) 00:10:36.963 5.879 - 5.902: 98.7329% ( 1) 00:10:36.963 5.926 - 5.950: 98.7405% ( 1) 00:10:36.963 5.950 - 5.973: 98.7557% ( 2) 00:10:36.963 5.997 - 6.021: 98.7633% ( 1) 00:10:36.963 6.068 - 6.116: 98.7709% ( 1) 00:10:36.963 6.116 - 6.163: 98.7785% ( 1) 00:10:36.963 6.163 - 6.210: 98.7860% ( 1) 00:10:36.963 6.258 - 6.305: 98.7936% ( 1) 00:10:36.963 6.305 - 6.353: 98.8012% ( 1) 00:10:36.963 6.400 - 6.447: 98.8088% ( 1) 00:10:36.963 6.495 - 6.542: 98.8164% ( 1) 00:10:36.963 7.585 - 7.633: 98.8240% ( 1) 00:10:36.963 7.964 - 8.012: 98.8316% ( 1) 00:10:36.963 10.524 - 10.572:
98.8392% ( 1) 00:10:36.963 15.360 - 15.455: 98.8467% ( 1) 00:10:36.963 15.455 - 15.550: 98.8543% ( 1) 00:10:36.963 15.739 - 15.834: 98.8998% ( 6) 00:10:36.963 15.834 - 15.929: 98.9454% ( 6) 00:10:36.963 15.929 - 16.024: 98.9681% ( 3) 00:10:36.963 16.024 - 16.119: 99.0137% ( 6) 00:10:36.963 16.119 - 16.213: 99.0744% ( 8) 00:10:36.963 16.213 - 16.308: 99.1047% ( 4) 00:10:36.963 16.308 - 16.403: 99.1123% ( 1) 00:10:36.963 16.403 - 16.498: 99.1199% ( 1) 00:10:36.963 16.498 - 16.593: 99.1882% ( 9) 00:10:36.963 16.593 - 16.687: 99.2337% ( 6) 00:10:36.963 16.687 - 16.782: 99.2716% ( 5) 00:10:36.963 16.782 - 16.877: 99.2792% ( 1) 00:10:36.963 16.877 - 16.972: 99.3096% ( 4) 00:10:36.963 16.972 - 17.067: 99.3171% ( 1) 00:10:36.963 17.161 - 17.256: 99.3399% ( 3) 00:10:36.963 17.256 - 17.351: 99.3475% ( 1) 00:10:36.963 17.351 - 17.446: 99.3627% ( 2) 00:10:36.963 17.446 - 17.541: 99.3703% ( 1) 00:10:36.963 17.541 - 17.636: 99.3854% ( 2) 00:10:36.963 17.636 - 17.730: 99.3930% ( 1) 00:10:36.963 17.730 - 17.825: 99.4006% ( 1) 00:10:36.963 17.920 - 18.015: 99.4158% ( 2) 00:10:36.963 19.721 - 19.816: 99.4234% ( 1) 00:10:36.963 3980.705 - 4004.978: 99.8710% ( 59) 00:10:36.963 4004.978 - 4029.250: 100.0000% ( 17) 00:10:36.963 00:10:36.963 00:57:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:10:36.963 00:57:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:36.963 00:57:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:10:36.963 00:57:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:10:36.963 00:57:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:36.963 [ 00:10:36.963 { 00:10:36.963 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:36.963 "subtype": "Discovery", 00:10:36.963 "listen_addresses": [], 00:10:36.963 "allow_any_host": true, 00:10:36.963 "hosts": [] 00:10:36.963 }, 00:10:36.963 { 00:10:36.963 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:36.963 "subtype": "NVMe", 00:10:36.963 "listen_addresses": [ 00:10:36.963 { 00:10:36.963 "trtype": "VFIOUSER", 00:10:36.963 "adrfam": "IPv4", 00:10:36.963 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:36.963 "trsvcid": "0" 00:10:36.963 } 00:10:36.963 ], 00:10:36.963 "allow_any_host": true, 00:10:36.963 "hosts": [], 00:10:36.963 "serial_number": "SPDK1", 00:10:36.963 "model_number": "SPDK bdev Controller", 00:10:36.963 "max_namespaces": 32, 00:10:36.963 "min_cntlid": 1, 00:10:36.963 "max_cntlid": 65519, 00:10:36.963 "namespaces": [ 00:10:36.963 { 00:10:36.963 "nsid": 1, 00:10:36.963 "bdev_name": "Malloc1", 00:10:36.963 "name": "Malloc1", 00:10:36.963 "nguid": "3FC929C1BA81446ABE66C62CC1A7B978", 00:10:36.963 "uuid": "3fc929c1-ba81-446a-be66-c62cc1a7b978" 00:10:36.963 }, 00:10:36.963 { 00:10:36.963 "nsid": 2, 00:10:36.963 "bdev_name": "Malloc3", 00:10:36.963 "name": "Malloc3", 00:10:36.963 "nguid": "E9E94AB53E3B48CC8057400DA700D55C", 00:10:36.963 "uuid": "e9e94ab5-3e3b-48cc-8057-400da700d55c" 00:10:36.963 } 00:10:36.963 ] 00:10:36.963 }, 00:10:36.963 { 00:10:36.963 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:36.963 "subtype": "NVMe", 00:10:36.963 "listen_addresses": [ 00:10:36.963 { 00:10:36.963 "trtype": "VFIOUSER", 00:10:36.963 "adrfam": "IPv4", 00:10:36.963 "traddr": 
"/var/run/vfio-user/domain/vfio-user2/2", 00:10:36.963 "trsvcid": "0" 00:10:36.963 } 00:10:36.963 ], 00:10:36.963 "allow_any_host": true, 00:10:36.963 "hosts": [], 00:10:36.963 "serial_number": "SPDK2", 00:10:36.963 "model_number": "SPDK bdev Controller", 00:10:36.963 "max_namespaces": 32, 00:10:36.963 "min_cntlid": 1, 00:10:36.963 "max_cntlid": 65519, 00:10:36.964 "namespaces": [ 00:10:36.964 { 00:10:36.964 "nsid": 1, 00:10:36.964 "bdev_name": "Malloc2", 00:10:36.964 "name": "Malloc2", 00:10:36.964 "nguid": "8B75FDEE5681472BB069D22194375716", 00:10:36.964 "uuid": "8b75fdee-5681-472b-b069-d22194375716" 00:10:36.964 } 00:10:36.964 ] 00:10:36.964 } 00:10:36.964 ] 00:10:36.964 00:57:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:36.964 00:57:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1203515 00:10:36.964 00:57:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:10:36.964 00:57:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:36.964 00:57:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:10:36.964 00:57:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:36.964 00:57:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:36.964 00:57:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:10:36.964 00:57:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:36.964 00:57:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:10:36.964 EAL: No free 2048 kB hugepages reported on node 1 00:10:37.222 [2024-05-15 00:57:49.368539] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:37.222 Malloc4 00:10:37.222 00:57:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:10:37.481 [2024-05-15 00:57:49.725342] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:37.481 00:57:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:37.481 Asynchronous Event Request test 00:10:37.481 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:37.481 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:37.481 Registering asynchronous event callbacks... 00:10:37.481 Starting namespace attribute notice tests for all controllers... 00:10:37.481 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:37.481 aer_cb - Changed Namespace 00:10:37.481 Cleaning up... 
00:10:37.740 [ 00:10:37.740 { 00:10:37.740 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:37.740 "subtype": "Discovery", 00:10:37.740 "listen_addresses": [], 00:10:37.740 "allow_any_host": true, 00:10:37.740 "hosts": [] 00:10:37.740 }, 00:10:37.740 { 00:10:37.740 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:37.740 "subtype": "NVMe", 00:10:37.740 "listen_addresses": [ 00:10:37.740 { 00:10:37.740 "trtype": "VFIOUSER", 00:10:37.740 "adrfam": "IPv4", 00:10:37.740 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:37.740 "trsvcid": "0" 00:10:37.740 } 00:10:37.740 ], 00:10:37.740 "allow_any_host": true, 00:10:37.740 "hosts": [], 00:10:37.740 "serial_number": "SPDK1", 00:10:37.740 "model_number": "SPDK bdev Controller", 00:10:37.740 "max_namespaces": 32, 00:10:37.740 "min_cntlid": 1, 00:10:37.740 "max_cntlid": 65519, 00:10:37.740 "namespaces": [ 00:10:37.740 { 00:10:37.740 "nsid": 1, 00:10:37.740 "bdev_name": "Malloc1", 00:10:37.740 "name": "Malloc1", 00:10:37.740 "nguid": "3FC929C1BA81446ABE66C62CC1A7B978", 00:10:37.740 "uuid": "3fc929c1-ba81-446a-be66-c62cc1a7b978" 00:10:37.740 }, 00:10:37.740 { 00:10:37.740 "nsid": 2, 00:10:37.740 "bdev_name": "Malloc3", 00:10:37.740 "name": "Malloc3", 00:10:37.740 "nguid": "E9E94AB53E3B48CC8057400DA700D55C", 00:10:37.740 "uuid": "e9e94ab5-3e3b-48cc-8057-400da700d55c" 00:10:37.740 } 00:10:37.740 ] 00:10:37.740 }, 00:10:37.740 { 00:10:37.740 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:37.740 "subtype": "NVMe", 00:10:37.740 "listen_addresses": [ 00:10:37.740 { 00:10:37.740 "trtype": "VFIOUSER", 00:10:37.740 "adrfam": "IPv4", 00:10:37.740 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:37.740 "trsvcid": "0" 00:10:37.740 } 00:10:37.740 ], 00:10:37.740 "allow_any_host": true, 00:10:37.740 "hosts": [], 00:10:37.740 "serial_number": "SPDK2", 00:10:37.740 "model_number": "SPDK bdev Controller", 00:10:37.740 "max_namespaces": 32, 00:10:37.740 "min_cntlid": 1, 00:10:37.740 "max_cntlid": 65519, 00:10:37.740 "namespaces": [ 00:10:37.740 { 00:10:37.740 "nsid": 1, 00:10:37.740 "bdev_name": "Malloc2", 00:10:37.740 "name": "Malloc2", 00:10:37.740 "nguid": "8B75FDEE5681472BB069D22194375716", 00:10:37.740 "uuid": "8b75fdee-5681-472b-b069-d22194375716" 00:10:37.740 }, 00:10:37.740 { 00:10:37.740 "nsid": 2, 00:10:37.740 "bdev_name": "Malloc4", 00:10:37.740 "name": "Malloc4", 00:10:37.740 "nguid": "C88D6587619B4815A6066B4E7025DF30", 00:10:37.740 "uuid": "c88d6587-619b-4815-a606-6b4e7025df30" 00:10:37.740 } 00:10:37.740 ] 00:10:37.740 } 00:10:37.740 ] 00:10:37.740 00:57:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1203515 00:10:37.740 00:57:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:10:37.740 00:57:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1197406 00:10:37.740 00:57:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 1197406 ']' 00:10:37.740 00:57:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 1197406 00:10:37.740 00:57:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:10:37.740 00:57:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:37.740 00:57:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1197406 00:10:37.740 00:57:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:37.741 00:57:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo 
']' 00:10:37.741 00:57:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1197406' 00:10:37.741 killing process with pid 1197406 00:10:37.741 00:57:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 1197406 00:10:37.741 [2024-05-15 00:57:50.004698] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:37.741 00:57:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 1197406 00:10:37.999 00:57:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:37.999 00:57:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:37.999 00:57:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:10:37.999 00:57:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:10:37.999 00:57:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:10:37.999 00:57:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1203658 00:10:37.999 00:57:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:10:37.999 00:57:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1203658' 00:10:37.999 Process pid: 1203658 00:10:37.999 00:57:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:37.999 00:57:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1203658 00:10:37.999 00:57:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 1203658 ']' 00:10:37.999 00:57:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.999 00:57:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:37.999 00:57:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.999 00:57:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:37.999 00:57:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:38.258 [2024-05-15 00:57:50.412633] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:10:38.258 [2024-05-15 00:57:50.413618] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:10:38.258 [2024-05-15 00:57:50.413675] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:38.258 EAL: No free 2048 kB hugepages reported on node 1 00:10:38.258 [2024-05-15 00:57:50.483791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:38.258 [2024-05-15 00:57:50.597716] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:38.258 [2024-05-15 00:57:50.597774] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:38.258 [2024-05-15 00:57:50.597802] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:38.258 [2024-05-15 00:57:50.597813] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:38.258 [2024-05-15 00:57:50.597823] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:38.258 [2024-05-15 00:57:50.597917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:38.258 [2024-05-15 00:57:50.597981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:38.258 [2024-05-15 00:57:50.598031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:38.258 [2024-05-15 00:57:50.598033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.518 [2024-05-15 00:57:50.705885] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:10:38.518 [2024-05-15 00:57:50.706189] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:10:38.518 [2024-05-15 00:57:50.706435] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:10:38.518 [2024-05-15 00:57:50.707047] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:10:38.518 [2024-05-15 00:57:50.707295] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
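The second target instance above is relaunched in SPDK interrupt mode before the vfio-user transport is recreated with the extra '-M -I' arguments; condensed from the trace that follows (waitforlisten is the autotest_common.sh helper seen in this script, and backgrounding of nvmf_tgt is assumed in this sketch), the bring-up amounts to:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  nvmfpid=$!
  waitforlisten $nvmfpid     # block until /var/tmp/spdk.sock is accepting RPCs
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I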
00:10:38.518 00:57:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:38.518 00:57:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:10:38.518 00:57:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:39.455 00:57:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:10:39.715 00:57:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:39.715 00:57:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:39.715 00:57:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:39.715 00:57:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:39.715 00:57:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:39.974 Malloc1 00:10:39.974 00:57:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:40.232 00:57:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:40.490 00:57:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:40.748 [2024-05-15 00:57:53.002663] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:40.748 00:57:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:40.748 00:57:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:40.748 00:57:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:41.006 Malloc2 00:10:41.006 00:57:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:41.266 00:57:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:41.525 00:57:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:41.784 00:57:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:10:41.784 00:57:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1203658 00:10:41.784 00:57:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 1203658 ']' 00:10:41.784 00:57:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 1203658 
00:10:41.784 00:57:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:10:41.784 00:57:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:41.784 00:57:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1203658 00:10:41.784 00:57:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:41.784 00:57:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:41.784 00:57:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1203658' 00:10:41.784 killing process with pid 1203658 00:10:41.784 00:57:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 1203658 00:10:41.784 [2024-05-15 00:57:54.155205] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:41.784 00:57:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 1203658 00:10:42.354 00:57:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:42.354 00:57:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:42.354 00:10:42.354 real 0m52.678s 00:10:42.354 user 3m27.754s 00:10:42.354 sys 0m4.488s 00:10:42.354 00:57:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:42.354 00:57:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:42.354 ************************************ 00:10:42.354 END TEST nvmf_vfio_user 00:10:42.354 ************************************ 00:10:42.354 00:57:54 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:42.354 00:57:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:42.354 00:57:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:42.354 00:57:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:42.354 ************************************ 00:10:42.354 START TEST nvmf_vfio_user_nvme_compliance 00:10:42.354 ************************************ 00:10:42.354 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:42.354 * Looking for test storage... 
00:10:42.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:10:42.354 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:42.354 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:10:42.354 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:42.354 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:42.354 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:42.354 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:42.354 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:42.354 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:42.354 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:42.354 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:42.354 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:42.354 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:42.354 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:42.354 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:42.354 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:42.354 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:42.354 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:42.354 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:42.354 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:42.354 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:42.354 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:42.354 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:42.354 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.354 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.354 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.354 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:10:42.355 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.355 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:10:42.355 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:42.355 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:42.355 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:42.355 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:42.355 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:42.355 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:42.355 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:42.355 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:42.355 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:42.355 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:42.355 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:10:42.355 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:10:42.355 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:10:42.355 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=1204254 00:10:42.355 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:42.355 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1204254' 00:10:42.355 Process pid: 1204254 00:10:42.355 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:42.355 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1204254 00:10:42.355 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 1204254 ']' 00:10:42.355 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.355 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:42.355 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.355 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:42.355 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:42.355 [2024-05-15 00:57:54.636274] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:10:42.355 [2024-05-15 00:57:54.636383] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.355 EAL: No free 2048 kB hugepages reported on node 1 00:10:42.355 [2024-05-15 00:57:54.707515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:42.637 [2024-05-15 00:57:54.821701] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:42.637 [2024-05-15 00:57:54.821768] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:42.637 [2024-05-15 00:57:54.821784] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:42.637 [2024-05-15 00:57:54.821797] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:42.637 [2024-05-15 00:57:54.821809] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:42.637 [2024-05-15 00:57:54.821870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.637 [2024-05-15 00:57:54.821899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:42.637 [2024-05-15 00:57:54.821903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.637 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:42.637 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:10:42.637 00:57:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:10:43.603 00:57:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:43.603 00:57:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:10:43.603 00:57:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:43.603 00:57:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.603 00:57:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:43.603 00:57:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.603 00:57:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:10:43.603 00:57:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:43.603 00:57:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.603 00:57:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:43.863 malloc0 00:10:43.863 00:57:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.863 00:57:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:10:43.863 00:57:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.863 00:57:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:43.863 00:57:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.863 00:57:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:43.863 00:57:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.863 00:57:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:43.863 00:57:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.863 00:57:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:43.863 00:57:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.863 00:57:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:43.863 [2024-05-15 00:57:56.024747] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated 
feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:43.863 00:57:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.863 00:57:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:10:43.863 EAL: No free 2048 kB hugepages reported on node 1 00:10:43.863 00:10:43.863 00:10:43.863 CUnit - A unit testing framework for C - Version 2.1-3 00:10:43.863 http://cunit.sourceforge.net/ 00:10:43.863 00:10:43.863 00:10:43.863 Suite: nvme_compliance 00:10:43.863 Test: admin_identify_ctrlr_verify_dptr ...[2024-05-15 00:57:56.205568] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:43.863 [2024-05-15 00:57:56.207049] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:10:43.863 [2024-05-15 00:57:56.207074] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:10:43.863 [2024-05-15 00:57:56.207103] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:10:43.863 [2024-05-15 00:57:56.208587] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:43.863 passed 00:10:44.123 Test: admin_identify_ctrlr_verify_fused ...[2024-05-15 00:57:56.294225] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.123 [2024-05-15 00:57:56.297257] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.123 passed 00:10:44.123 Test: admin_identify_ns ...[2024-05-15 00:57:56.385591] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.123 [2024-05-15 00:57:56.444947] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:10:44.123 [2024-05-15 00:57:56.452948] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:10:44.123 [2024-05-15 00:57:56.474072] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.123 passed 00:10:44.386 Test: admin_get_features_mandatory_features ...[2024-05-15 00:57:56.557891] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.386 [2024-05-15 00:57:56.560934] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.386 passed 00:10:44.386 Test: admin_get_features_optional_features ...[2024-05-15 00:57:56.646519] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.386 [2024-05-15 00:57:56.649543] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.386 passed 00:10:44.386 Test: admin_set_features_number_of_queues ...[2024-05-15 00:57:56.730497] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.645 [2024-05-15 00:57:56.839052] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.645 passed 00:10:44.645 Test: admin_get_log_page_mandatory_logs ...[2024-05-15 00:57:56.924169] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.645 [2024-05-15 00:57:56.927195] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.645 passed 
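The compliance run above builds its VFIOUSER target entirely through RPC calls before launching the test binary. A minimal sketch of the equivalent manual sequence, assuming a target application is already running, that scripts/rpc.py from an SPDK checkout talks to its default RPC socket, and with the bdev name, subsystem NQN and socket directory copied from the log:

    mkdir -p /var/run/vfio-user
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0        # 64 MiB malloc bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0
    # the compliance tool then connects over vfio-user (flags as recorded in the log):
    test/nvme/compliance/nvme_compliance -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'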
00:10:44.645 Test: admin_get_log_page_with_lpo ...[2024-05-15 00:57:57.009422] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.915 [2024-05-15 00:57:57.080962] ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:10:44.915 [2024-05-15 00:57:57.094012] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.915 passed 00:10:44.915 Test: fabric_property_get ...[2024-05-15 00:57:57.175781] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.915 [2024-05-15 00:57:57.177069] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:10:44.915 [2024-05-15 00:57:57.178799] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.915 passed 00:10:44.915 Test: admin_delete_io_sq_use_admin_qid ...[2024-05-15 00:57:57.263361] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.915 [2024-05-15 00:57:57.264616] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:10:44.915 [2024-05-15 00:57:57.266381] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:45.240 passed 00:10:45.240 Test: admin_delete_io_sq_delete_sq_twice ...[2024-05-15 00:57:57.347603] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:45.240 [2024-05-15 00:57:57.434941] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:45.240 [2024-05-15 00:57:57.450945] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:45.240 [2024-05-15 00:57:57.456077] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:45.240 passed 00:10:45.240 Test: admin_delete_io_cq_use_admin_qid ...[2024-05-15 00:57:57.537826] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:45.240 [2024-05-15 00:57:57.539112] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:10:45.240 [2024-05-15 00:57:57.540850] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:45.240 passed 00:10:45.500 Test: admin_delete_io_cq_delete_cq_first ...[2024-05-15 00:57:57.622234] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:45.500 [2024-05-15 00:57:57.701942] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:45.500 [2024-05-15 00:57:57.725955] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:45.500 [2024-05-15 00:57:57.731065] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:45.500 passed 00:10:45.500 Test: admin_create_io_cq_verify_iv_pc ...[2024-05-15 00:57:57.811699] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:45.500 [2024-05-15 00:57:57.813029] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:10:45.500 [2024-05-15 00:57:57.813084] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:10:45.500 [2024-05-15 00:57:57.816733] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:45.500 passed 00:10:45.760 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-05-15 
00:57:57.899757] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:45.760 [2024-05-15 00:57:57.990937] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:10:45.760 [2024-05-15 00:57:57.998956] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:10:45.760 [2024-05-15 00:57:58.006938] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:10:45.760 [2024-05-15 00:57:58.014946] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:10:45.760 [2024-05-15 00:57:58.044045] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:45.760 passed 00:10:45.760 Test: admin_create_io_sq_verify_pc ...[2024-05-15 00:57:58.127319] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:45.760 [2024-05-15 00:57:58.141955] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:10:46.019 [2024-05-15 00:57:58.159658] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:46.019 passed 00:10:46.019 Test: admin_create_io_qp_max_qps ...[2024-05-15 00:57:58.246273] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:46.957 [2024-05-15 00:57:59.338958] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:10:47.526 [2024-05-15 00:57:59.721178] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:47.526 passed 00:10:47.526 Test: admin_create_io_sq_shared_cq ...[2024-05-15 00:57:59.803629] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:47.785 [2024-05-15 00:57:59.934953] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:47.785 [2024-05-15 00:57:59.972037] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:47.785 passed 00:10:47.785 00:10:47.785 Run Summary: Type Total Ran Passed Failed Inactive 00:10:47.785 suites 1 1 n/a 0 0 00:10:47.785 tests 18 18 18 0 0 00:10:47.785 asserts 360 360 360 0 n/a 00:10:47.785 00:10:47.785 Elapsed time = 1.561 seconds 00:10:47.785 00:58:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1204254 00:10:47.785 00:58:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 1204254 ']' 00:10:47.785 00:58:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 1204254 00:10:47.785 00:58:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:10:47.785 00:58:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:47.785 00:58:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1204254 00:10:47.785 00:58:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:47.785 00:58:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:47.785 00:58:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1204254' 00:10:47.785 killing process with pid 1204254 00:10:47.785 00:58:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@965 -- # kill 1204254 00:10:47.785 [2024-05-15 00:58:00.049244] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:47.785 00:58:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 1204254 00:10:48.043 00:58:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:10:48.043 00:58:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:10:48.043 00:10:48.043 real 0m5.805s 00:10:48.043 user 0m16.191s 00:10:48.043 sys 0m0.585s 00:10:48.043 00:58:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:48.043 00:58:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:48.043 ************************************ 00:10:48.043 END TEST nvmf_vfio_user_nvme_compliance 00:10:48.043 ************************************ 00:10:48.043 00:58:00 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:48.043 00:58:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:48.043 00:58:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:48.043 00:58:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:48.043 ************************************ 00:10:48.043 START TEST nvmf_vfio_user_fuzz 00:10:48.043 ************************************ 00:10:48.043 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:48.302 * Looking for test storage... 
00:10:48.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:10:48.302 00:58:00 
nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1204983 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1204983' 00:10:48.302 Process pid: 1204983 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1204983 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 1204983 ']' 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:48.302 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:48.562 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:48.562 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:10:48.562 00:58:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:10:49.499 00:58:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:49.499 00:58:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.499 00:58:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:49.499 00:58:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.499 00:58:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:10:49.499 00:58:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:49.499 00:58:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.499 00:58:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:49.499 malloc0 00:10:49.499 00:58:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.499 00:58:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:10:49.499 00:58:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.499 00:58:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:49.499 00:58:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.499 00:58:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:49.499 00:58:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.499 00:58:01 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:10:49.499 00:58:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.499 00:58:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:49.499 00:58:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.499 00:58:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:49.499 00:58:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.499 00:58:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:10:49.499 00:58:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:11:21.592 Fuzzing completed. Shutting down the fuzz application 00:11:21.592 00:11:21.592 Dumping successful admin opcodes: 00:11:21.592 8, 9, 10, 24, 00:11:21.592 Dumping successful io opcodes: 00:11:21.592 0, 00:11:21.592 NS: 0x200003a1ef00 I/O qp, Total commands completed: 661244, total successful commands: 2580, random_seed: 2994800960 00:11:21.592 NS: 0x200003a1ef00 admin qp, Total commands completed: 84138, total successful commands: 668, random_seed: 4007001664 00:11:21.592 00:58:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:11:21.592 00:58:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.592 00:58:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:21.592 00:58:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.592 00:58:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1204983 00:11:21.592 00:58:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 1204983 ']' 00:11:21.592 00:58:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 1204983 00:11:21.592 00:58:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:11:21.592 00:58:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:21.592 00:58:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1204983 00:11:21.592 00:58:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:21.592 00:58:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:21.592 00:58:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1204983' 00:11:21.592 killing process with pid 1204983 00:11:21.592 00:58:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # kill 1204983 00:11:21.592 00:58:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 1204983 00:11:21.592 00:58:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 
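The fuzz pass that just finished reuses the same malloc0/VFIOUSER subsystem layout as the sketch earlier, but serves it from a dedicated single-core target and drives it with SPDK's nvme_fuzz example for a fixed-length, fixed-seed run. A condensed sketch, with binaries relative to an SPDK build tree and all flags copied verbatim from the vfio_user_fuzz.sh invocation in the log:

    # target: core mask 0x1, tracepoint group mask 0xFFFF (-e), shared-memory id 0 (-i)
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    # ... VFIOUSER transport / malloc0 / nqn.2021-09.io.spdk:cnode0 setup as above ...

    # fuzzer: separate core mask (-m 0x2), 30-second run (-t 30), fixed seed (-S 123456)
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' \
        -N -a

Fixing the seed keeps the fuzz command sequence reproducible between CI runs; the opcode and command-count summary printed above is what the fuzzer emits when it shuts down after the timeout.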
00:11:21.592 00:58:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:11:21.592 00:11:21.592 real 0m32.367s 00:11:21.592 user 0m33.452s 00:11:21.592 sys 0m25.740s 00:11:21.592 00:58:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:21.592 00:58:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:21.592 ************************************ 00:11:21.592 END TEST nvmf_vfio_user_fuzz 00:11:21.592 ************************************ 00:11:21.592 00:58:32 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:21.592 00:58:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:21.592 00:58:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:21.592 00:58:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:21.592 ************************************ 00:11:21.592 START TEST nvmf_host_management 00:11:21.592 ************************************ 00:11:21.592 00:58:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:21.592 * Looking for test storage... 00:11:21.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.592 00:58:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:21.592 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:21.592 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:21.592 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.592 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:21.592 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.592 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.592 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.592 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.592 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.592 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.592 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.592 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:21.592 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:21.592 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.592 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.592 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:21.592 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:21.592 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:21.592 00:58:32 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.592 00:58:32 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.592 00:58:32 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.592 00:58:32 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.592 00:58:32 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.593 00:58:32 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.593 00:58:32 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:21.593 00:58:32 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.593 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:11:21.593 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:21.593 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:21.593 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:21.593 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.593 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:11:21.593 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:21.593 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:21.593 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:21.593 00:58:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:21.593 00:58:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:21.593 00:58:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:21.593 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:21.593 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:21.593 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:21.593 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:21.593 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:21.593 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.593 00:58:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:21.593 00:58:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.593 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:21.593 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:21.593 00:58:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:11:21.593 00:58:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:22.982 00:58:35 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:22.982 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:22.982 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:22.982 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:22.982 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:22.982 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:22.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:22.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:11:22.983 00:11:22.983 --- 10.0.0.2 ping statistics --- 00:11:22.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.983 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:22.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:22.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:11:22.983 00:11:22.983 --- 10.0.0.1 ping statistics --- 00:11:22.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.983 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1210720 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1210720 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 1210720 ']' 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:22.983 00:58:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:22.983 [2024-05-15 00:58:35.347459] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
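The nvmftestinit sequence above turns the two ports of the E810 card into a small back-to-back NVMe/TCP test bed: one port stays in the root namespace as the initiator, the other is moved into a private network namespace where the target will run, presumably to force traffic over the physical link rather than the local stack. A condensed sketch of the commands recorded above, assuming the cvl_0_0/cvl_0_1 interface names from this host:

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
    ping -c 1 10.0.0.2                                         # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator
    # every nvmf_tgt start that follows is prefixed with: ip netns exec cvl_0_0_ns_spdk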
00:11:22.983 [2024-05-15 00:58:35.347552] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:23.243 EAL: No free 2048 kB hugepages reported on node 1 00:11:23.243 [2024-05-15 00:58:35.424435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:23.243 [2024-05-15 00:58:35.536004] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:23.243 [2024-05-15 00:58:35.536061] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:23.243 [2024-05-15 00:58:35.536090] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:23.243 [2024-05-15 00:58:35.536101] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:23.243 [2024-05-15 00:58:35.536115] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:23.243 [2024-05-15 00:58:35.536212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:23.243 [2024-05-15 00:58:35.536259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:23.243 [2024-05-15 00:58:35.536317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:23.243 [2024-05-15 00:58:35.536320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:24.179 [2024-05-15 00:58:36.333940] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.179 00:58:36 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:24.179 Malloc0 00:11:24.179 [2024-05-15 00:58:36.392745] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:24.179 [2024-05-15 00:58:36.393098] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1210896 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1210896 /var/tmp/bdevperf.sock 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 1210896 ']' 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:24.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:24.179 { 00:11:24.179 "params": { 00:11:24.179 "name": "Nvme$subsystem", 00:11:24.179 "trtype": "$TEST_TRANSPORT", 00:11:24.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:24.179 "adrfam": "ipv4", 00:11:24.179 "trsvcid": "$NVMF_PORT", 00:11:24.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:24.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:24.179 "hdgst": ${hdgst:-false}, 00:11:24.179 "ddgst": ${ddgst:-false} 00:11:24.179 }, 00:11:24.179 "method": "bdev_nvme_attach_controller" 00:11:24.179 } 00:11:24.179 EOF 00:11:24.179 )") 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:24.179 00:58:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:24.179 "params": { 00:11:24.179 "name": "Nvme0", 00:11:24.179 "trtype": "tcp", 00:11:24.179 "traddr": "10.0.0.2", 00:11:24.179 "adrfam": "ipv4", 00:11:24.179 "trsvcid": "4420", 00:11:24.179 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:24.179 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:24.179 "hdgst": false, 00:11:24.179 "ddgst": false 00:11:24.179 }, 00:11:24.179 "method": "bdev_nvme_attach_controller" 00:11:24.179 }' 00:11:24.179 [2024-05-15 00:58:36.462955] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:11:24.179 [2024-05-15 00:58:36.463036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1210896 ] 00:11:24.179 EAL: No free 2048 kB hugepages reported on node 1 00:11:24.179 [2024-05-15 00:58:36.535617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.439 [2024-05-15 00:58:36.646202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.699 Running I/O for 10 seconds... 00:11:25.300 00:58:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:25.300 00:58:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:11:25.300 00:58:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:25.300 00:58:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.300 00:58:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:25.300 00:58:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.300 00:58:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:25.300 00:58:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:25.300 00:58:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:25.300 00:58:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:25.300 00:58:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:25.300 00:58:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:25.300 00:58:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:25.300 00:58:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:25.300 00:58:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:25.300 00:58:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:25.300 00:58:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.300 00:58:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:25.300 00:58:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.300 00:58:37 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=533 00:11:25.300 00:58:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 533 -ge 100 ']' 00:11:25.300 00:58:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:11:25.300 00:58:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:25.300 00:58:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:25.300 00:58:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:25.300 00:58:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.300 00:58:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:25.300 [2024-05-15 00:58:37.478253] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12379d0 is same with the state(5) to be set 00:11:25.300 [2024-05-15 00:58:37.478319] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12379d0 is same with the state(5) to be set 00:11:25.300 [2024-05-15 00:58:37.478350] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12379d0 is same with the state(5) to be set 00:11:25.300 [2024-05-15 00:58:37.478363] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12379d0 is same with the state(5) to be set 00:11:25.300 [2024-05-15 00:58:37.478375] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12379d0 is same with the state(5) to be set 00:11:25.300 [2024-05-15 00:58:37.478387] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12379d0 is same with the state(5) to be set 00:11:25.300 [2024-05-15 00:58:37.478399] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12379d0 is same with the state(5) to be set 00:11:25.300 [2024-05-15 00:58:37.478661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.478702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.478732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.478748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.478765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.478779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.478794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.478808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.478824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.478838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.478854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.478867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.478883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.478907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.478923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.478946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.478962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.478976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.478994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.479007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.479022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.479035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.479052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.479067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.479084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.479099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.479115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.479129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.479146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.479160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.479176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.479191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.479207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.479221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.479244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.479258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.479275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.479289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.479312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.479327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.479343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.479358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.479374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.479389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.479404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.479419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.479435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.479450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.479465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.479480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.479495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.479510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.479526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.479540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.479556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.479571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.479587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.479601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.479617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.479631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.479647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.479661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.479678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.479696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.479712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.479727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.479743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.479757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.479773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:11:25.301 [2024-05-15 00:58:37.479787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.479803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.479817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.479833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.479848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.479864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.301 [2024-05-15 00:58:37.479878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.301 [2024-05-15 00:58:37.479894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.302 [2024-05-15 00:58:37.479909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.302 [2024-05-15 00:58:37.479924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.302 [2024-05-15 00:58:37.479945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.302 [2024-05-15 00:58:37.479961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.302 [2024-05-15 00:58:37.479976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.302 [2024-05-15 00:58:37.479993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.302 [2024-05-15 00:58:37.480008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.302 [2024-05-15 00:58:37.480024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.302 [2024-05-15 00:58:37.480038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.302 [2024-05-15 00:58:37.480054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.302 [2024-05-15 00:58:37.480069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.302 [2024-05-15 00:58:37.480090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.302 [2024-05-15 
00:58:37.480105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.302 [2024-05-15 00:58:37.480121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.302 [2024-05-15 00:58:37.480135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.302 [2024-05-15 00:58:37.480152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.302 [2024-05-15 00:58:37.480167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.302 [2024-05-15 00:58:37.480182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.302 [2024-05-15 00:58:37.480197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.302 [2024-05-15 00:58:37.480213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.302 [2024-05-15 00:58:37.480227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.302 [2024-05-15 00:58:37.480243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.302 [2024-05-15 00:58:37.480258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.302 [2024-05-15 00:58:37.480274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.302 [2024-05-15 00:58:37.480288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.302 [2024-05-15 00:58:37.480303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.302 [2024-05-15 00:58:37.480318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.302 [2024-05-15 00:58:37.480334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.302 [2024-05-15 00:58:37.480348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.302 [2024-05-15 00:58:37.480364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.302 [2024-05-15 00:58:37.480378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.302 [2024-05-15 00:58:37.480394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.302 [2024-05-15 00:58:37.480408] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.302 [2024-05-15 00:58:37.480424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.302 [2024-05-15 00:58:37.480438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.302 [2024-05-15 00:58:37.480454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.302 [2024-05-15 00:58:37.480472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.302 [2024-05-15 00:58:37.480488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.302 [2024-05-15 00:58:37.480503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.302 [2024-05-15 00:58:37.480519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.302 [2024-05-15 00:58:37.480534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.302 [2024-05-15 00:58:37.480550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.302 [2024-05-15 00:58:37.480564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.302 [2024-05-15 00:58:37.480580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.302 [2024-05-15 00:58:37.480594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.302 [2024-05-15 00:58:37.480610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.302 [2024-05-15 00:58:37.480624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.302 [2024-05-15 00:58:37.480640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.302 [2024-05-15 00:58:37.480655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.302 [2024-05-15 00:58:37.480670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:25.302 [2024-05-15 00:58:37.480685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.302 [2024-05-15 00:58:37.480700] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x120bf20 is same with the state(5) to be set 00:11:25.302 [2024-05-15 00:58:37.480778] 
bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x120bf20 was disconnected and freed. reset controller. 00:11:25.302 [2024-05-15 00:58:37.480861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.302 [2024-05-15 00:58:37.480884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.302 [2024-05-15 00:58:37.480900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.302 [2024-05-15 00:58:37.480914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.302 [2024-05-15 00:58:37.480936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.302 [2024-05-15 00:58:37.480953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.302 [2024-05-15 00:58:37.480968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.302 [2024-05-15 00:58:37.480982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.302 [2024-05-15 00:58:37.480995] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdda990 is same with the state(5) to be set 00:11:25.302 00:58:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.302 00:58:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:25.302 00:58:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.302 00:58:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:25.303 [2024-05-15 00:58:37.482126] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:11:25.303 task offset: 81792 on job bdev=Nvme0n1 fails 00:11:25.303 00:11:25.303 Latency(us) 00:11:25.303 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:25.303 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:25.303 Job: Nvme0n1 ended in about 0.60 seconds with error 00:11:25.303 Verification LBA range: start 0x0 length 0x400 00:11:25.303 Nvme0n1 : 0.60 963.71 60.23 106.16 0.00 58685.45 6796.33 48156.82 00:11:25.303 =================================================================================================================== 00:11:25.303 Total : 963.71 60.23 106.16 0.00 58685.45 6796.33 48156.82 00:11:25.303 [2024-05-15 00:58:37.484136] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:25.303 [2024-05-15 00:58:37.484165] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdda990 (9): Bad file descriptor 00:11:25.303 00:58:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.303 00:58:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:11:25.303 [2024-05-15 00:58:37.490601] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting 
controller successful. 00:11:26.239 00:58:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1210896 00:11:26.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1210896) - No such process 00:11:26.239 00:58:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:11:26.239 00:58:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:26.239 00:58:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:26.239 00:58:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:26.239 00:58:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:26.239 00:58:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:26.239 00:58:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:26.239 00:58:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:26.239 { 00:11:26.239 "params": { 00:11:26.239 "name": "Nvme$subsystem", 00:11:26.239 "trtype": "$TEST_TRANSPORT", 00:11:26.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:26.239 "adrfam": "ipv4", 00:11:26.239 "trsvcid": "$NVMF_PORT", 00:11:26.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:26.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:26.239 "hdgst": ${hdgst:-false}, 00:11:26.239 "ddgst": ${ddgst:-false} 00:11:26.239 }, 00:11:26.239 "method": "bdev_nvme_attach_controller" 00:11:26.239 } 00:11:26.239 EOF 00:11:26.239 )") 00:11:26.239 00:58:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:26.239 00:58:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:26.239 00:58:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:26.239 00:58:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:26.239 "params": { 00:11:26.239 "name": "Nvme0", 00:11:26.239 "trtype": "tcp", 00:11:26.239 "traddr": "10.0.0.2", 00:11:26.239 "adrfam": "ipv4", 00:11:26.239 "trsvcid": "4420", 00:11:26.239 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:26.239 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:26.239 "hdgst": false, 00:11:26.239 "ddgst": false 00:11:26.239 }, 00:11:26.239 "method": "bdev_nvme_attach_controller" 00:11:26.239 }' 00:11:26.239 [2024-05-15 00:58:38.537438] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:11:26.240 [2024-05-15 00:58:38.537530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1211172 ] 00:11:26.240 EAL: No free 2048 kB hugepages reported on node 1 00:11:26.240 [2024-05-15 00:58:38.608474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.500 [2024-05-15 00:58:38.721273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.758 Running I/O for 1 seconds... 
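Editor's note on the sequence above: by the time the script reaches kill -9 on the perf pid, bdevperf has already stopped itself (the spdk_app_stop'd-on-non-zero warning earlier), so the "No such process" error is tolerated and a fresh one-second bdevperf pass is launched to confirm the target still serves I/O once the host is re-admitted. The RPCs that drove the first pass, condensed (rpc_cmd in the trace wraps scripts/rpc.py; the shorthand rpc.py is used here):

  # 1) Gate on real traffic: poll bdevperf's own RPC socket for read completions.
  rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops'
  #    533 reads in this run, past the >= 100 threshold the script checks.
  # 2) Evict the host NQN: in-flight I/O completes as "ABORTED - SQ DELETION"
  #    (the wall of NOTICE lines above).
  rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # 3) Re-admit it: bdevperf's controller reset then succeeds.
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0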
00:11:27.695 00:11:27.695 Latency(us) 00:11:27.695 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:27.695 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:27.695 Verification LBA range: start 0x0 length 0x400 00:11:27.695 Nvme0n1 : 1.02 1196.16 74.76 0.00 0.00 52745.78 13301.38 45632.47 00:11:27.695 =================================================================================================================== 00:11:27.695 Total : 1196.16 74.76 0.00 0.00 52745.78 13301.38 45632.47 00:11:27.955 00:58:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:27.955 00:58:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:27.955 00:58:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:27.955 00:58:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:27.955 00:58:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:27.955 00:58:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:27.955 00:58:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:11:27.955 00:58:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:27.955 00:58:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:11:27.955 00:58:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:27.955 00:58:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:27.955 rmmod nvme_tcp 00:11:27.955 rmmod nvme_fabrics 00:11:27.955 rmmod nvme_keyring 00:11:27.955 00:58:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:27.955 00:58:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:11:27.955 00:58:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:11:27.955 00:58:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1210720 ']' 00:11:27.955 00:58:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1210720 00:11:27.955 00:58:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 1210720 ']' 00:11:27.955 00:58:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 1210720 00:11:27.955 00:58:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:11:27.955 00:58:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:27.955 00:58:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1210720 00:11:27.955 00:58:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:11:27.955 00:58:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:11:27.955 00:58:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1210720' 00:11:27.955 killing process with pid 1210720 00:11:27.955 00:58:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 1210720 00:11:27.955 [2024-05-15 00:58:40.295162] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation 
'[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:27.955 00:58:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 1210720 00:11:28.215 [2024-05-15 00:58:40.567016] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:28.215 00:58:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:28.215 00:58:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:28.215 00:58:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:28.215 00:58:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:28.215 00:58:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:28.215 00:58:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.215 00:58:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:28.215 00:58:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.759 00:58:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:30.759 00:58:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:30.759 00:11:30.759 real 0m9.821s 00:11:30.759 user 0m23.511s 00:11:30.759 sys 0m3.077s 00:11:30.759 00:58:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:30.759 00:58:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:30.759 ************************************ 00:11:30.759 END TEST nvmf_host_management 00:11:30.759 ************************************ 00:11:30.759 00:58:42 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:30.759 00:58:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:30.759 00:58:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:30.759 00:58:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:30.759 ************************************ 00:11:30.759 START TEST nvmf_lvol 00:11:30.759 ************************************ 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:30.759 * Looking for test storage... 
00:11:30.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.759 00:58:42 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:11:30.759 00:58:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:33.296 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:33.296 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.296 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:33.297 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:33.297 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:33.297 
00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:33.297 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:33.297 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:11:33.297 00:11:33.297 --- 10.0.0.2 ping statistics --- 00:11:33.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.297 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:33.297 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:33.297 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:11:33.297 00:11:33.297 --- 10.0.0.1 ping statistics --- 00:11:33.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.297 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1213669 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1213669 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 1213669 ']' 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:33.297 00:58:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:33.297 [2024-05-15 00:58:45.437030] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:11:33.297 [2024-05-15 00:58:45.437125] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.297 EAL: No free 2048 kB hugepages reported on node 1 00:11:33.297 [2024-05-15 00:58:45.519026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:33.297 [2024-05-15 00:58:45.634559] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:33.297 [2024-05-15 00:58:45.634629] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
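Editor's note: the 10.0.0.x reachability just verified comes from the nvmf_tcp_init block a few lines up. Condensed from those steps (sudo and the surrounding error handling dropped), the first E810 port becomes the target inside a private namespace while its sibling stays in the root namespace as the initiator:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP in
  ping -c 1 10.0.0.2                                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator
  # The target app is then launched inside the namespace, as traced above:
  #   ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x7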
00:11:33.297 [2024-05-15 00:58:45.634645] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:33.297 [2024-05-15 00:58:45.634658] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:33.297 [2024-05-15 00:58:45.634670] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:33.297 [2024-05-15 00:58:45.634781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.297 [2024-05-15 00:58:45.634864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.297 [2024-05-15 00:58:45.634861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:34.236 00:58:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:34.236 00:58:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:11:34.236 00:58:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:34.236 00:58:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:34.236 00:58:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:34.236 00:58:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:34.236 00:58:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:34.236 [2024-05-15 00:58:46.625306] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:34.495 00:58:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:34.756 00:58:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:34.756 00:58:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:35.015 00:58:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:35.015 00:58:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:35.274 00:58:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:35.533 00:58:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=28c0458e-61b3-4aea-b897-dbd2bba79989 00:11:35.533 00:58:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 28c0458e-61b3-4aea-b897-dbd2bba79989 lvol 20 00:11:35.792 00:58:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=8405690a-7b93-4a87-8178-83e821c3b001 00:11:35.792 00:58:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:35.792 00:58:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8405690a-7b93-4a87-8178-83e821c3b001 00:11:36.050 00:58:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:11:36.309 [2024-05-15 00:58:48.649227] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:36.309 [2024-05-15 00:58:48.649526] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:36.309 00:58:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:36.567 00:58:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1214103 00:11:36.567 00:58:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:36.568 00:58:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:36.568 EAL: No free 2048 kB hugepages reported on node 1 00:11:37.943 00:58:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 8405690a-7b93-4a87-8178-83e821c3b001 MY_SNAPSHOT 00:11:37.943 00:58:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=81116cf1-63d0-4304-b66e-4c03ea1b107b 00:11:37.943 00:58:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 8405690a-7b93-4a87-8178-83e821c3b001 30 00:11:38.202 00:58:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 81116cf1-63d0-4304-b66e-4c03ea1b107b MY_CLONE 00:11:38.460 00:58:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f9f43e6a-2399-4bae-af42-b0a403134b1e 00:11:38.460 00:58:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f9f43e6a-2399-4bae-af42-b0a403134b1e 00:11:39.029 00:58:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1214103 00:11:47.149 Initializing NVMe Controllers 00:11:47.149 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:47.149 Controller IO queue size 128, less than required. 00:11:47.149 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:47.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:47.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:47.149 Initialization complete. Launching workers. 
00:11:47.149 ======================================================== 00:11:47.149 Latency(us) 00:11:47.149 Device Information : IOPS MiB/s Average min max 00:11:47.149 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8003.30 31.26 16002.72 567.25 108132.82 00:11:47.149 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11255.70 43.97 11374.63 2055.89 74941.56 00:11:47.149 ======================================================== 00:11:47.149 Total : 19259.00 75.23 13297.89 567.25 108132.82 00:11:47.149 00:11:47.149 00:58:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:47.424 00:58:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8405690a-7b93-4a87-8178-83e821c3b001 00:11:47.709 00:58:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 28c0458e-61b3-4aea-b897-dbd2bba79989 00:11:47.969 00:59:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:47.969 00:59:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:47.969 00:59:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:47.969 00:59:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:47.969 00:59:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:11:47.969 00:59:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:47.969 00:59:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:11:47.969 00:59:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:47.969 00:59:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:47.969 rmmod nvme_tcp 00:11:47.969 rmmod nvme_fabrics 00:11:47.969 rmmod nvme_keyring 00:11:47.969 00:59:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:47.969 00:59:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:11:47.969 00:59:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:11:47.969 00:59:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1213669 ']' 00:11:47.969 00:59:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1213669 00:11:47.969 00:59:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 1213669 ']' 00:11:47.969 00:59:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 1213669 00:11:47.969 00:59:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:11:47.969 00:59:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:47.969 00:59:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1213669 00:11:47.969 00:59:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:47.969 00:59:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:47.970 00:59:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1213669' 00:11:47.970 killing process with pid 1213669 00:11:47.970 00:59:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 1213669 00:11:47.970 [2024-05-15 00:59:00.248559] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled 
for removal in v24.09 hit 1 times 00:11:47.970 00:59:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 1213669 00:11:48.228 00:59:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:48.228 00:59:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:48.228 00:59:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:48.228 00:59:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:48.229 00:59:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:48.229 00:59:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.229 00:59:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:48.229 00:59:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:50.772 00:11:50.772 real 0m19.928s 00:11:50.772 user 1m2.845s 00:11:50.772 sys 0m7.290s 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:50.772 ************************************ 00:11:50.772 END TEST nvmf_lvol 00:11:50.772 ************************************ 00:11:50.772 00:59:02 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:50.772 00:59:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:50.772 00:59:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:50.772 00:59:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:50.772 ************************************ 00:11:50.772 START TEST nvmf_lvs_grow 00:11:50.772 ************************************ 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:50.772 * Looking for test storage... 
00:11:50.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:11:50.772 00:59:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:53.310 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:53.310 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:11:53.310 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:53.310 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:53.310 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:53.310 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:53.310 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:53.310 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:11:53.310 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:53.310 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:11:53.310 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:11:53.310 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:11:53.310 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:11:53.310 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:11:53.310 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:11:53.310 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:53.310 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:53.310 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:53.310 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:53.311 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:53.311 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:53.311 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:53.311 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:53.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:53.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:11:53.311 00:11:53.311 --- 10.0.0.2 ping statistics --- 00:11:53.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.311 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:53.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:53.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:11:53.311 00:11:53.311 --- 10.0.0.1 ping statistics --- 00:11:53.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.311 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1217660 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1217660 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 1217660 ']' 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:53.311 [2024-05-15 00:59:05.394070] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:11:53.311 [2024-05-15 00:59:05.394151] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.311 EAL: No free 2048 kB hugepages reported on node 1 00:11:53.311 [2024-05-15 00:59:05.471333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.311 [2024-05-15 00:59:05.577966] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.311 [2024-05-15 00:59:05.578050] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:53.311 [2024-05-15 00:59:05.578064] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.311 [2024-05-15 00:59:05.578075] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.311 [2024-05-15 00:59:05.578085] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:53.311 [2024-05-15 00:59:05.578119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:53.311 00:59:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:11:53.312 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:53.312 00:59:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:53.312 00:59:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:53.570 00:59:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:53.570 00:59:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:53.829 [2024-05-15 00:59:05.992378] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:53.829 00:59:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:53.829 00:59:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:53.829 00:59:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:53.829 00:59:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:53.829 ************************************ 00:11:53.829 START TEST lvs_grow_clean 00:11:53.829 ************************************ 00:11:53.829 00:59:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:11:53.829 00:59:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:53.829 00:59:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:53.829 00:59:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:53.829 00:59:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:53.829 00:59:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:53.829 00:59:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:53.829 00:59:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:53.829 00:59:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:53.829 00:59:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:54.089 00:59:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:11:54.089 00:59:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:54.349 00:59:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e12fa632-da1a-4a6e-8557-09e09e1ee030 00:11:54.349 00:59:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e12fa632-da1a-4a6e-8557-09e09e1ee030 00:11:54.349 00:59:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:54.609 00:59:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:54.609 00:59:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:54.609 00:59:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e12fa632-da1a-4a6e-8557-09e09e1ee030 lvol 150 00:11:54.869 00:59:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=523703d5-55e8-418e-8d1a-21257f2891d1 00:11:54.869 00:59:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:54.869 00:59:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:55.129 [2024-05-15 00:59:07.294086] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:55.129 [2024-05-15 00:59:07.294170] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:55.129 true 00:11:55.129 00:59:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e12fa632-da1a-4a6e-8557-09e09e1ee030 00:11:55.129 00:59:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:55.388 00:59:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:55.388 00:59:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:55.648 00:59:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 523703d5-55e8-418e-8d1a-21257f2891d1 00:11:55.907 00:59:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:56.166 [2024-05-15 00:59:08.345035] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:56.166 [2024-05-15 
00:59:08.345385] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.166 00:59:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:56.425 00:59:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1218095 00:11:56.425 00:59:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:56.425 00:59:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:56.425 00:59:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1218095 /var/tmp/bdevperf.sock 00:11:56.426 00:59:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 1218095 ']' 00:11:56.426 00:59:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:56.426 00:59:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:56.426 00:59:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:56.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:56.426 00:59:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:56.426 00:59:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:56.426 [2024-05-15 00:59:08.686969] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:11:56.426 [2024-05-15 00:59:08.687042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1218095 ] 00:11:56.426 EAL: No free 2048 kB hugepages reported on node 1 00:11:56.426 [2024-05-15 00:59:08.758993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.685 [2024-05-15 00:59:08.879730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:56.685 00:59:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:56.685 00:59:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:11:56.685 00:59:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:57.254 Nvme0n1 00:11:57.254 00:59:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:57.514 [ 00:11:57.514 { 00:11:57.514 "name": "Nvme0n1", 00:11:57.514 "aliases": [ 00:11:57.514 "523703d5-55e8-418e-8d1a-21257f2891d1" 00:11:57.514 ], 00:11:57.514 "product_name": "NVMe disk", 00:11:57.514 "block_size": 4096, 00:11:57.514 "num_blocks": 38912, 00:11:57.514 "uuid": "523703d5-55e8-418e-8d1a-21257f2891d1", 00:11:57.514 "assigned_rate_limits": { 00:11:57.514 "rw_ios_per_sec": 0, 00:11:57.514 "rw_mbytes_per_sec": 0, 00:11:57.514 "r_mbytes_per_sec": 0, 00:11:57.514 "w_mbytes_per_sec": 0 00:11:57.514 }, 00:11:57.514 "claimed": false, 00:11:57.514 "zoned": false, 00:11:57.514 "supported_io_types": { 00:11:57.514 "read": true, 00:11:57.514 "write": true, 00:11:57.514 "unmap": true, 00:11:57.514 "write_zeroes": true, 00:11:57.514 "flush": true, 00:11:57.514 "reset": true, 00:11:57.514 "compare": true, 00:11:57.514 "compare_and_write": true, 00:11:57.514 "abort": true, 00:11:57.514 "nvme_admin": true, 00:11:57.514 "nvme_io": true 00:11:57.514 }, 00:11:57.514 "memory_domains": [ 00:11:57.514 { 00:11:57.514 "dma_device_id": "system", 00:11:57.514 "dma_device_type": 1 00:11:57.514 } 00:11:57.514 ], 00:11:57.514 "driver_specific": { 00:11:57.514 "nvme": [ 00:11:57.514 { 00:11:57.514 "trid": { 00:11:57.514 "trtype": "TCP", 00:11:57.514 "adrfam": "IPv4", 00:11:57.514 "traddr": "10.0.0.2", 00:11:57.514 "trsvcid": "4420", 00:11:57.514 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:57.514 }, 00:11:57.514 "ctrlr_data": { 00:11:57.514 "cntlid": 1, 00:11:57.514 "vendor_id": "0x8086", 00:11:57.514 "model_number": "SPDK bdev Controller", 00:11:57.514 "serial_number": "SPDK0", 00:11:57.514 "firmware_revision": "24.05", 00:11:57.514 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:57.514 "oacs": { 00:11:57.514 "security": 0, 00:11:57.514 "format": 0, 00:11:57.514 "firmware": 0, 00:11:57.514 "ns_manage": 0 00:11:57.514 }, 00:11:57.514 "multi_ctrlr": true, 00:11:57.514 "ana_reporting": false 00:11:57.514 }, 00:11:57.514 "vs": { 00:11:57.514 "nvme_version": "1.3" 00:11:57.514 }, 00:11:57.514 "ns_data": { 00:11:57.514 "id": 1, 00:11:57.514 "can_share": true 00:11:57.514 } 00:11:57.514 } 00:11:57.514 ], 00:11:57.514 "mp_policy": "active_passive" 00:11:57.514 } 00:11:57.514 } 00:11:57.514 ] 00:11:57.514 00:59:09 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1218233 00:11:57.514 00:59:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:57.514 00:59:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:57.514 Running I/O for 10 seconds... 00:11:58.454 Latency(us) 00:11:58.454 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:58.454 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:58.454 Nvme0n1 : 1.00 14286.00 55.80 0.00 0.00 0.00 0.00 0.00 00:11:58.454 =================================================================================================================== 00:11:58.454 Total : 14286.00 55.80 0.00 0.00 0.00 0.00 0.00 00:11:58.454 00:11:59.392 00:59:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e12fa632-da1a-4a6e-8557-09e09e1ee030 00:11:59.650 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:59.650 Nvme0n1 : 2.00 14279.50 55.78 0.00 0.00 0.00 0.00 0.00 00:11:59.650 =================================================================================================================== 00:11:59.650 Total : 14279.50 55.78 0.00 0.00 0.00 0.00 0.00 00:11:59.650 00:11:59.650 true 00:11:59.650 00:59:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e12fa632-da1a-4a6e-8557-09e09e1ee030 00:11:59.650 00:59:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:59.908 00:59:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:59.908 00:59:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:59.908 00:59:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1218233 00:12:00.478 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:00.478 Nvme0n1 : 3.00 14319.33 55.93 0.00 0.00 0.00 0.00 0.00 00:12:00.478 =================================================================================================================== 00:12:00.478 Total : 14319.33 55.93 0.00 0.00 0.00 0.00 0.00 00:12:00.478 00:12:01.414 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:01.414 Nvme0n1 : 4.00 14435.75 56.39 0.00 0.00 0.00 0.00 0.00 00:12:01.414 =================================================================================================================== 00:12:01.414 Total : 14435.75 56.39 0.00 0.00 0.00 0.00 0.00 00:12:01.414 00:12:02.796 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:02.796 Nvme0n1 : 5.00 14492.60 56.61 0.00 0.00 0.00 0.00 0.00 00:12:02.796 =================================================================================================================== 00:12:02.796 Total : 14492.60 56.61 0.00 0.00 0.00 0.00 0.00 00:12:02.796 00:12:03.734 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:03.734 Nvme0n1 : 6.00 14551.83 56.84 0.00 0.00 0.00 0.00 0.00 00:12:03.734 
=================================================================================================================== 00:12:03.734 Total : 14551.83 56.84 0.00 0.00 0.00 0.00 0.00 00:12:03.734 00:12:04.698 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:04.698 Nvme0n1 : 7.00 14621.43 57.11 0.00 0.00 0.00 0.00 0.00 00:12:04.698 =================================================================================================================== 00:12:04.698 Total : 14621.43 57.11 0.00 0.00 0.00 0.00 0.00 00:12:04.698 00:12:05.636 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:05.636 Nvme0n1 : 8.00 14657.88 57.26 0.00 0.00 0.00 0.00 0.00 00:12:05.636 =================================================================================================================== 00:12:05.636 Total : 14657.88 57.26 0.00 0.00 0.00 0.00 0.00 00:12:05.636 00:12:06.574 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:06.574 Nvme0n1 : 9.00 14686.00 57.37 0.00 0.00 0.00 0.00 0.00 00:12:06.574 =================================================================================================================== 00:12:06.574 Total : 14686.00 57.37 0.00 0.00 0.00 0.00 0.00 00:12:06.574 00:12:07.510 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:07.510 Nvme0n1 : 10.00 14734.30 57.56 0.00 0.00 0.00 0.00 0.00 00:12:07.510 =================================================================================================================== 00:12:07.510 Total : 14734.30 57.56 0.00 0.00 0.00 0.00 0.00 00:12:07.510 00:12:07.510 00:12:07.510 Latency(us) 00:12:07.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:07.510 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:07.510 Nvme0n1 : 10.01 14735.68 57.56 0.00 0.00 8679.98 5752.60 15728.64 00:12:07.510 =================================================================================================================== 00:12:07.510 Total : 14735.68 57.56 0.00 0.00 8679.98 5752.60 15728.64 00:12:07.510 0 00:12:07.510 00:59:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1218095 00:12:07.510 00:59:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 1218095 ']' 00:12:07.510 00:59:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 1218095 00:12:07.510 00:59:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:12:07.510 00:59:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:07.511 00:59:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1218095 00:12:07.511 00:59:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:07.511 00:59:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:07.511 00:59:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1218095' 00:12:07.511 killing process with pid 1218095 00:12:07.511 00:59:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 1218095 00:12:07.511 Received shutdown signal, test time was about 10.000000 seconds 00:12:07.511 00:12:07.511 Latency(us) 00:12:07.511 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:12:07.511 =================================================================================================================== 00:12:07.511 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:07.511 00:59:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 1218095 00:12:07.769 00:59:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:08.337 00:59:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:08.337 00:59:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e12fa632-da1a-4a6e-8557-09e09e1ee030 00:12:08.337 00:59:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:08.596 00:59:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:08.596 00:59:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:08.596 00:59:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:08.856 [2024-05-15 00:59:21.212791] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:08.856 00:59:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e12fa632-da1a-4a6e-8557-09e09e1ee030 00:12:08.856 00:59:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:12:08.856 00:59:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e12fa632-da1a-4a6e-8557-09e09e1ee030 00:12:08.856 00:59:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:09.115 00:59:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:09.115 00:59:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:09.115 00:59:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:09.115 00:59:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:09.115 00:59:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:09.115 00:59:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:09.115 00:59:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:09.115 00:59:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e12fa632-da1a-4a6e-8557-09e09e1ee030 00:12:09.115 request: 00:12:09.115 { 00:12:09.115 "uuid": "e12fa632-da1a-4a6e-8557-09e09e1ee030", 00:12:09.115 "method": "bdev_lvol_get_lvstores", 00:12:09.115 "req_id": 1 00:12:09.115 } 00:12:09.115 Got JSON-RPC error response 00:12:09.115 response: 00:12:09.115 { 00:12:09.115 "code": -19, 00:12:09.115 "message": "No such device" 00:12:09.115 } 00:12:09.115 00:59:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:12:09.115 00:59:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:09.115 00:59:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:09.115 00:59:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:09.115 00:59:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:09.683 aio_bdev 00:12:09.683 00:59:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 523703d5-55e8-418e-8d1a-21257f2891d1 00:12:09.683 00:59:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=523703d5-55e8-418e-8d1a-21257f2891d1 00:12:09.683 00:59:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:09.683 00:59:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:12:09.683 00:59:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:09.683 00:59:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:09.683 00:59:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:09.683 00:59:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 523703d5-55e8-418e-8d1a-21257f2891d1 -t 2000 00:12:09.942 [ 00:12:09.942 { 00:12:09.942 "name": "523703d5-55e8-418e-8d1a-21257f2891d1", 00:12:09.942 "aliases": [ 00:12:09.942 "lvs/lvol" 00:12:09.942 ], 00:12:09.942 "product_name": "Logical Volume", 00:12:09.942 "block_size": 4096, 00:12:09.942 "num_blocks": 38912, 00:12:09.942 "uuid": "523703d5-55e8-418e-8d1a-21257f2891d1", 00:12:09.942 "assigned_rate_limits": { 00:12:09.942 "rw_ios_per_sec": 0, 00:12:09.942 "rw_mbytes_per_sec": 0, 00:12:09.942 "r_mbytes_per_sec": 0, 00:12:09.942 "w_mbytes_per_sec": 0 00:12:09.942 }, 00:12:09.942 "claimed": false, 00:12:09.942 "zoned": false, 00:12:09.942 "supported_io_types": { 00:12:09.942 "read": true, 00:12:09.942 "write": true, 00:12:09.942 "unmap": true, 00:12:09.942 "write_zeroes": true, 00:12:09.942 "flush": false, 00:12:09.942 "reset": true, 00:12:09.942 "compare": false, 00:12:09.942 "compare_and_write": false, 00:12:09.942 "abort": false, 00:12:09.942 "nvme_admin": false, 00:12:09.942 "nvme_io": false 00:12:09.942 }, 00:12:09.942 "driver_specific": { 00:12:09.942 "lvol": { 00:12:09.942 "lvol_store_uuid": "e12fa632-da1a-4a6e-8557-09e09e1ee030", 00:12:09.942 "base_bdev": "aio_bdev", 
00:12:09.942 "thin_provision": false, 00:12:09.942 "num_allocated_clusters": 38, 00:12:09.942 "snapshot": false, 00:12:09.942 "clone": false, 00:12:09.942 "esnap_clone": false 00:12:09.942 } 00:12:09.942 } 00:12:09.942 } 00:12:09.942 ] 00:12:09.942 00:59:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:12:09.942 00:59:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e12fa632-da1a-4a6e-8557-09e09e1ee030 00:12:09.942 00:59:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:10.200 00:59:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:10.200 00:59:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e12fa632-da1a-4a6e-8557-09e09e1ee030 00:12:10.200 00:59:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:10.459 00:59:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:10.459 00:59:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 523703d5-55e8-418e-8d1a-21257f2891d1 00:12:10.719 00:59:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e12fa632-da1a-4a6e-8557-09e09e1ee030 00:12:10.978 00:59:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:11.234 00:59:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:11.234 00:12:11.234 real 0m17.528s 00:12:11.234 user 0m16.994s 00:12:11.234 sys 0m1.834s 00:12:11.234 00:59:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:11.234 00:59:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:11.234 ************************************ 00:12:11.234 END TEST lvs_grow_clean 00:12:11.234 ************************************ 00:12:11.234 00:59:23 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:11.234 00:59:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:11.234 00:59:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:11.234 00:59:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:11.234 ************************************ 00:12:11.234 START TEST lvs_grow_dirty 00:12:11.234 ************************************ 00:12:11.234 00:59:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:12:11.234 00:59:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:11.234 00:59:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:11.234 00:59:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:12:11.234 00:59:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:11.234 00:59:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:11.234 00:59:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:11.234 00:59:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:11.234 00:59:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:11.234 00:59:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:11.493 00:59:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:11.493 00:59:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:11.751 00:59:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6859e047-23e5-4b66-813b-f5cd0b4dd297 00:12:11.751 00:59:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6859e047-23e5-4b66-813b-f5cd0b4dd297 00:12:11.751 00:59:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:12.009 00:59:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:12.009 00:59:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:12.009 00:59:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6859e047-23e5-4b66-813b-f5cd0b4dd297 lvol 150 00:12:12.268 00:59:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=4203806c-4bf0-4450-abc3-a9aef7823f37 00:12:12.268 00:59:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:12.269 00:59:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:12.527 [2024-05-15 00:59:24.867092] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:12.527 [2024-05-15 00:59:24.867168] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:12.527 true 00:12:12.527 00:59:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6859e047-23e5-4b66-813b-f5cd0b4dd297 00:12:12.527 00:59:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:12:12.787 00:59:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:12.787 00:59:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:13.046 00:59:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4203806c-4bf0-4450-abc3-a9aef7823f37 00:12:13.306 00:59:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:13.564 [2024-05-15 00:59:25.846072] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:13.564 00:59:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:13.823 00:59:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1220263 00:12:13.823 00:59:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:13.823 00:59:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:13.823 00:59:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1220263 /var/tmp/bdevperf.sock 00:12:13.823 00:59:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 1220263 ']' 00:12:13.823 00:59:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:13.823 00:59:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:13.823 00:59:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:13.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:13.823 00:59:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:13.823 00:59:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:13.823 [2024-05-15 00:59:26.142190] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
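At this point the target side of the dirty grow test is fully assembled: a 150 MiB logical volume carved out of the AIO-backed lvstore is exposed as a namespace of nqn.2016-06.io.spdk:cnode0 behind a TCP listener on 10.0.0.2:4420, and a bdevperf instance has just been launched against /var/tmp/bdevperf.sock to drive randwrite traffic at it. Condensed to the bare rpc.py calls that appear in the trace (paths abbreviated to the SPDK tree, UUIDs are the ones printed above, and flags such as --md-pages-per-cluster-ratio are dropped), the setup is roughly:

  # AIO file -> lvstore (4 MiB clusters) -> 150 MiB lvol
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  lvs=$(scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 aio_bdev lvs)
  lvol=$(scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150)

  # expose the lvol over NVMe/TCP
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The backing file was already truncated from 200 MiB to 400 MiB and rescanned with bdev_aio_rescan a few entries back; the extra capacity only becomes usable when bdev_lvol_grow_lvstore is issued further down, while bdevperf keeps I/O in flight.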
00:12:13.823 [2024-05-15 00:59:26.142274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1220263 ] 00:12:13.823 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.081 [2024-05-15 00:59:26.215298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.082 [2024-05-15 00:59:26.331101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.082 00:59:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:14.082 00:59:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:12:14.082 00:59:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:14.649 Nvme0n1 00:12:14.649 00:59:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:14.649 [ 00:12:14.649 { 00:12:14.649 "name": "Nvme0n1", 00:12:14.649 "aliases": [ 00:12:14.649 "4203806c-4bf0-4450-abc3-a9aef7823f37" 00:12:14.649 ], 00:12:14.649 "product_name": "NVMe disk", 00:12:14.649 "block_size": 4096, 00:12:14.649 "num_blocks": 38912, 00:12:14.649 "uuid": "4203806c-4bf0-4450-abc3-a9aef7823f37", 00:12:14.649 "assigned_rate_limits": { 00:12:14.649 "rw_ios_per_sec": 0, 00:12:14.649 "rw_mbytes_per_sec": 0, 00:12:14.649 "r_mbytes_per_sec": 0, 00:12:14.649 "w_mbytes_per_sec": 0 00:12:14.649 }, 00:12:14.649 "claimed": false, 00:12:14.649 "zoned": false, 00:12:14.649 "supported_io_types": { 00:12:14.649 "read": true, 00:12:14.649 "write": true, 00:12:14.649 "unmap": true, 00:12:14.649 "write_zeroes": true, 00:12:14.649 "flush": true, 00:12:14.649 "reset": true, 00:12:14.649 "compare": true, 00:12:14.649 "compare_and_write": true, 00:12:14.649 "abort": true, 00:12:14.649 "nvme_admin": true, 00:12:14.649 "nvme_io": true 00:12:14.649 }, 00:12:14.649 "memory_domains": [ 00:12:14.649 { 00:12:14.649 "dma_device_id": "system", 00:12:14.649 "dma_device_type": 1 00:12:14.649 } 00:12:14.649 ], 00:12:14.649 "driver_specific": { 00:12:14.649 "nvme": [ 00:12:14.649 { 00:12:14.649 "trid": { 00:12:14.649 "trtype": "TCP", 00:12:14.649 "adrfam": "IPv4", 00:12:14.649 "traddr": "10.0.0.2", 00:12:14.649 "trsvcid": "4420", 00:12:14.649 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:14.649 }, 00:12:14.649 "ctrlr_data": { 00:12:14.649 "cntlid": 1, 00:12:14.649 "vendor_id": "0x8086", 00:12:14.649 "model_number": "SPDK bdev Controller", 00:12:14.649 "serial_number": "SPDK0", 00:12:14.649 "firmware_revision": "24.05", 00:12:14.649 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:14.649 "oacs": { 00:12:14.649 "security": 0, 00:12:14.649 "format": 0, 00:12:14.649 "firmware": 0, 00:12:14.649 "ns_manage": 0 00:12:14.649 }, 00:12:14.649 "multi_ctrlr": true, 00:12:14.649 "ana_reporting": false 00:12:14.649 }, 00:12:14.649 "vs": { 00:12:14.649 "nvme_version": "1.3" 00:12:14.649 }, 00:12:14.649 "ns_data": { 00:12:14.649 "id": 1, 00:12:14.649 "can_share": true 00:12:14.649 } 00:12:14.649 } 00:12:14.649 ], 00:12:14.649 "mp_policy": "active_passive" 00:12:14.649 } 00:12:14.649 } 00:12:14.649 ] 00:12:14.649 00:59:27 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1220288 00:12:14.649 00:59:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:14.649 00:59:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:14.909 Running I/O for 10 seconds... 00:12:15.846 Latency(us) 00:12:15.846 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:15.846 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:15.846 Nvme0n1 : 1.00 11639.00 45.46 0.00 0.00 0.00 0.00 0.00 00:12:15.846 =================================================================================================================== 00:12:15.846 Total : 11639.00 45.46 0.00 0.00 0.00 0.00 0.00 00:12:15.846 00:12:16.786 00:59:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6859e047-23e5-4b66-813b-f5cd0b4dd297 00:12:16.786 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:16.786 Nvme0n1 : 2.00 11557.50 45.15 0.00 0.00 0.00 0.00 0.00 00:12:16.786 =================================================================================================================== 00:12:16.786 Total : 11557.50 45.15 0.00 0.00 0.00 0.00 0.00 00:12:16.786 00:12:17.045 true 00:12:17.045 00:59:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6859e047-23e5-4b66-813b-f5cd0b4dd297 00:12:17.045 00:59:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:17.304 00:59:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:17.304 00:59:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:17.304 00:59:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1220288 00:12:17.874 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:17.874 Nvme0n1 : 3.00 11521.33 45.01 0.00 0.00 0.00 0.00 0.00 00:12:17.874 =================================================================================================================== 00:12:17.874 Total : 11521.33 45.01 0.00 0.00 0.00 0.00 0.00 00:12:17.874 00:12:18.808 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:18.808 Nvme0n1 : 4.00 11478.25 44.84 0.00 0.00 0.00 0.00 0.00 00:12:18.808 =================================================================================================================== 00:12:18.808 Total : 11478.25 44.84 0.00 0.00 0.00 0.00 0.00 00:12:18.808 00:12:19.778 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:19.778 Nvme0n1 : 5.00 11484.80 44.86 0.00 0.00 0.00 0.00 0.00 00:12:19.778 =================================================================================================================== 00:12:19.778 Total : 11484.80 44.86 0.00 0.00 0.00 0.00 0.00 00:12:19.778 00:12:21.153 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:21.153 Nvme0n1 : 6.00 11515.17 44.98 0.00 0.00 0.00 0.00 0.00 00:12:21.153 
=================================================================================================================== 00:12:21.153 Total : 11515.17 44.98 0.00 0.00 0.00 0.00 0.00 00:12:21.153 00:12:22.087 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:22.087 Nvme0n1 : 7.00 11584.57 45.25 0.00 0.00 0.00 0.00 0.00 00:12:22.087 =================================================================================================================== 00:12:22.087 Total : 11584.57 45.25 0.00 0.00 0.00 0.00 0.00 00:12:22.087 00:12:23.021 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:23.021 Nvme0n1 : 8.00 11589.12 45.27 0.00 0.00 0.00 0.00 0.00 00:12:23.021 =================================================================================================================== 00:12:23.021 Total : 11589.12 45.27 0.00 0.00 0.00 0.00 0.00 00:12:23.021 00:12:23.955 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:23.955 Nvme0n1 : 9.00 11592.56 45.28 0.00 0.00 0.00 0.00 0.00 00:12:23.955 =================================================================================================================== 00:12:23.955 Total : 11592.56 45.28 0.00 0.00 0.00 0.00 0.00 00:12:23.955 00:12:24.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:24.890 Nvme0n1 : 10.00 11639.80 45.47 0.00 0.00 0.00 0.00 0.00 00:12:24.890 =================================================================================================================== 00:12:24.890 Total : 11639.80 45.47 0.00 0.00 0.00 0.00 0.00 00:12:24.890 00:12:24.890 00:12:24.890 Latency(us) 00:12:24.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:24.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:24.890 Nvme0n1 : 10.01 11640.47 45.47 0.00 0.00 10990.11 3398.16 22330.79 00:12:24.890 =================================================================================================================== 00:12:24.890 Total : 11640.47 45.47 0.00 0.00 10990.11 3398.16 22330.79 00:12:24.890 0 00:12:24.890 00:59:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1220263 00:12:24.890 00:59:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 1220263 ']' 00:12:24.890 00:59:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 1220263 00:12:24.890 00:59:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:12:24.890 00:59:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:24.890 00:59:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1220263 00:12:24.890 00:59:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:24.890 00:59:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:24.890 00:59:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1220263' 00:12:24.890 killing process with pid 1220263 00:12:24.890 00:59:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 1220263 00:12:24.890 Received shutdown signal, test time was about 10.000000 seconds 00:12:24.890 00:12:24.890 Latency(us) 00:12:24.890 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:12:24.890 =================================================================================================================== 00:12:24.890 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:24.890 00:59:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 1220263 00:12:25.149 00:59:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:25.406 00:59:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:25.973 00:59:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6859e047-23e5-4b66-813b-f5cd0b4dd297 00:12:25.973 00:59:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:25.973 00:59:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:25.973 00:59:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:25.973 00:59:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1217660 00:12:25.973 00:59:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1217660 00:12:25.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1217660 Killed "${NVMF_APP[@]}" "$@" 00:12:25.973 00:59:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:25.973 00:59:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:25.973 00:59:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:25.973 00:59:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:25.973 00:59:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:25.973 00:59:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1221628 00:12:25.973 00:59:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:25.973 00:59:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1221628 00:12:25.973 00:59:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 1221628 ']' 00:12:25.973 00:59:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.973 00:59:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:25.973 00:59:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
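Two details here are worth spelling out. First, the free_clusters=61 value checked just above follows directly from the geometry set up earlier: the 200 MiB backing file gave the store 49 data clusters of 4 MiB each (the remainder going to lvstore metadata), the grow to 400 MiB doubled that to 99, and the 150 MiB lvol pins ceil(150/4) = 38 of them, leaving 99 - 38 = 61 free. Second, because this is the dirty variant, the nvmf target holding the lvstore is killed with SIGKILL instead of being shut down cleanly, and a fresh target is started in its place, so the store will have to be reconstructed from on-disk metadata when the AIO file is re-attached. Stripped of the netns wrapper, launch flags and harness helpers, that crash-and-restart step amounts to something like:

  # simulate a crash: no unload, no clean lvstore shutdown
  kill -9 "$nvmfpid"

  # bring up a fresh target and hand it the same backing file; loading it
  # triggers blobstore recovery (see the bs_recover notices further down)
  build/bin/nvmf_tgt -m 0x1 &
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  scripts/rpc.py bdev_lvol_get_lvstores   # free/total clusters must match the pre-crash values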
00:12:25.973 00:59:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:25.973 00:59:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:26.231 [2024-05-15 00:59:38.399062] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:12:26.232 [2024-05-15 00:59:38.399152] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.232 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.232 [2024-05-15 00:59:38.480606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.232 [2024-05-15 00:59:38.590312] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.232 [2024-05-15 00:59:38.590368] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.232 [2024-05-15 00:59:38.590382] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.232 [2024-05-15 00:59:38.590394] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.232 [2024-05-15 00:59:38.590404] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:26.232 [2024-05-15 00:59:38.590438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.166 00:59:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:27.166 00:59:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:12:27.166 00:59:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:27.166 00:59:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:27.166 00:59:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:27.167 00:59:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.167 00:59:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:27.425 [2024-05-15 00:59:39.628899] blobstore.c:4838:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:27.425 [2024-05-15 00:59:39.629060] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:27.425 [2024-05-15 00:59:39.629111] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:27.425 00:59:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:27.425 00:59:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 4203806c-4bf0-4450-abc3-a9aef7823f37 00:12:27.425 00:59:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=4203806c-4bf0-4450-abc3-a9aef7823f37 00:12:27.425 00:59:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:27.425 00:59:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:12:27.425 00:59:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:27.425 00:59:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:27.425 00:59:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:27.684 00:59:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4203806c-4bf0-4450-abc3-a9aef7823f37 -t 2000 00:12:27.942 [ 00:12:27.942 { 00:12:27.942 "name": "4203806c-4bf0-4450-abc3-a9aef7823f37", 00:12:27.942 "aliases": [ 00:12:27.942 "lvs/lvol" 00:12:27.942 ], 00:12:27.942 "product_name": "Logical Volume", 00:12:27.942 "block_size": 4096, 00:12:27.942 "num_blocks": 38912, 00:12:27.942 "uuid": "4203806c-4bf0-4450-abc3-a9aef7823f37", 00:12:27.942 "assigned_rate_limits": { 00:12:27.942 "rw_ios_per_sec": 0, 00:12:27.942 "rw_mbytes_per_sec": 0, 00:12:27.942 "r_mbytes_per_sec": 0, 00:12:27.942 "w_mbytes_per_sec": 0 00:12:27.942 }, 00:12:27.942 "claimed": false, 00:12:27.942 "zoned": false, 00:12:27.942 "supported_io_types": { 00:12:27.942 "read": true, 00:12:27.942 "write": true, 00:12:27.942 "unmap": true, 00:12:27.942 "write_zeroes": true, 00:12:27.942 "flush": false, 00:12:27.942 "reset": true, 00:12:27.942 "compare": false, 00:12:27.942 "compare_and_write": false, 00:12:27.942 "abort": false, 00:12:27.942 "nvme_admin": false, 00:12:27.942 "nvme_io": false 00:12:27.942 }, 00:12:27.942 "driver_specific": { 00:12:27.943 "lvol": { 00:12:27.943 "lvol_store_uuid": "6859e047-23e5-4b66-813b-f5cd0b4dd297", 00:12:27.943 "base_bdev": "aio_bdev", 00:12:27.943 "thin_provision": false, 00:12:27.943 "num_allocated_clusters": 38, 00:12:27.943 "snapshot": false, 00:12:27.943 "clone": false, 00:12:27.943 "esnap_clone": false 00:12:27.943 } 00:12:27.943 } 00:12:27.943 } 00:12:27.943 ] 00:12:27.943 00:59:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:12:27.943 00:59:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6859e047-23e5-4b66-813b-f5cd0b4dd297 00:12:27.943 00:59:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:28.201 00:59:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:28.201 00:59:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6859e047-23e5-4b66-813b-f5cd0b4dd297 00:12:28.201 00:59:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:28.459 00:59:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:28.459 00:59:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:28.717 [2024-05-15 00:59:40.942338] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:28.717 00:59:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
6859e047-23e5-4b66-813b-f5cd0b4dd297 00:12:28.717 00:59:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:12:28.717 00:59:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6859e047-23e5-4b66-813b-f5cd0b4dd297 00:12:28.717 00:59:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:28.717 00:59:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:28.717 00:59:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:28.717 00:59:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:28.717 00:59:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:28.717 00:59:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:28.717 00:59:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:28.717 00:59:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:28.717 00:59:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6859e047-23e5-4b66-813b-f5cd0b4dd297 00:12:28.975 request: 00:12:28.975 { 00:12:28.975 "uuid": "6859e047-23e5-4b66-813b-f5cd0b4dd297", 00:12:28.975 "method": "bdev_lvol_get_lvstores", 00:12:28.975 "req_id": 1 00:12:28.975 } 00:12:28.975 Got JSON-RPC error response 00:12:28.975 response: 00:12:28.975 { 00:12:28.975 "code": -19, 00:12:28.975 "message": "No such device" 00:12:28.975 } 00:12:28.975 00:59:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:12:28.975 00:59:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:28.975 00:59:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:28.975 00:59:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:28.975 00:59:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:29.233 aio_bdev 00:12:29.233 00:59:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4203806c-4bf0-4450-abc3-a9aef7823f37 00:12:29.233 00:59:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=4203806c-4bf0-4450-abc3-a9aef7823f37 00:12:29.233 00:59:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:29.233 00:59:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:12:29.233 00:59:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
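The pattern traced here is the core of the dirty-recovery check: deleting the AIO bdev hot-removes the lvstore, so bdev_lvol_get_lvstores is expected to fail with -19 "No such device" (the NOT wrapper above asserts exactly that), and re-creating the AIO bdev lets the examine path rediscover the store and its lvol without any explicit import step. A minimal standalone version of that check, using the same RPCs and the UUIDs printed above, would look like:

  # once aio_bdev is gone, the lvstore must be gone with it
  if scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs"; then
      echo "lvstore unexpectedly survived the hot-remove" >&2
      exit 1
  fi

  # re-attach the same file; bdev examine brings the lvstore and its lvol back
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py bdev_get_bdevs -b "$lvol" -t 2000   # wait up to 2000 ms for the lvol to reappear

The free_clusters/total_data_clusters comparison that follows (61 and 99 again) is what proves the recovered store matches the pre-removal state.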
00:12:29.233 00:59:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:29.233 00:59:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:29.491 00:59:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4203806c-4bf0-4450-abc3-a9aef7823f37 -t 2000 00:12:29.749 [ 00:12:29.749 { 00:12:29.749 "name": "4203806c-4bf0-4450-abc3-a9aef7823f37", 00:12:29.749 "aliases": [ 00:12:29.749 "lvs/lvol" 00:12:29.749 ], 00:12:29.749 "product_name": "Logical Volume", 00:12:29.749 "block_size": 4096, 00:12:29.749 "num_blocks": 38912, 00:12:29.749 "uuid": "4203806c-4bf0-4450-abc3-a9aef7823f37", 00:12:29.749 "assigned_rate_limits": { 00:12:29.749 "rw_ios_per_sec": 0, 00:12:29.749 "rw_mbytes_per_sec": 0, 00:12:29.749 "r_mbytes_per_sec": 0, 00:12:29.749 "w_mbytes_per_sec": 0 00:12:29.749 }, 00:12:29.749 "claimed": false, 00:12:29.749 "zoned": false, 00:12:29.749 "supported_io_types": { 00:12:29.749 "read": true, 00:12:29.749 "write": true, 00:12:29.749 "unmap": true, 00:12:29.749 "write_zeroes": true, 00:12:29.749 "flush": false, 00:12:29.749 "reset": true, 00:12:29.749 "compare": false, 00:12:29.749 "compare_and_write": false, 00:12:29.749 "abort": false, 00:12:29.749 "nvme_admin": false, 00:12:29.749 "nvme_io": false 00:12:29.749 }, 00:12:29.749 "driver_specific": { 00:12:29.749 "lvol": { 00:12:29.749 "lvol_store_uuid": "6859e047-23e5-4b66-813b-f5cd0b4dd297", 00:12:29.749 "base_bdev": "aio_bdev", 00:12:29.749 "thin_provision": false, 00:12:29.749 "num_allocated_clusters": 38, 00:12:29.749 "snapshot": false, 00:12:29.749 "clone": false, 00:12:29.749 "esnap_clone": false 00:12:29.749 } 00:12:29.749 } 00:12:29.749 } 00:12:29.749 ] 00:12:29.749 00:59:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:12:29.749 00:59:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6859e047-23e5-4b66-813b-f5cd0b4dd297 00:12:29.749 00:59:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:30.008 00:59:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:30.008 00:59:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6859e047-23e5-4b66-813b-f5cd0b4dd297 00:12:30.008 00:59:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:30.266 00:59:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:30.266 00:59:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4203806c-4bf0-4450-abc3-a9aef7823f37 00:12:30.523 00:59:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6859e047-23e5-4b66-813b-f5cd0b4dd297 00:12:30.781 00:59:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:31.039 00:59:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:31.039 00:12:31.039 real 0m19.716s 00:12:31.039 user 0m45.096s 00:12:31.039 sys 0m5.948s 00:12:31.039 00:59:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:31.039 00:59:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:31.039 ************************************ 00:12:31.039 END TEST lvs_grow_dirty 00:12:31.039 ************************************ 00:12:31.039 00:59:43 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:31.039 00:59:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:12:31.039 00:59:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:12:31.039 00:59:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:12:31.039 00:59:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:31.039 00:59:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:12:31.039 00:59:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:12:31.039 00:59:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:12:31.039 00:59:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:31.039 nvmf_trace.0 00:12:31.039 00:59:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:12:31.039 00:59:43 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:31.039 00:59:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:31.039 00:59:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:12:31.039 00:59:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:31.039 00:59:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:12:31.039 00:59:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:31.039 00:59:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:31.039 rmmod nvme_tcp 00:12:31.039 rmmod nvme_fabrics 00:12:31.296 rmmod nvme_keyring 00:12:31.296 00:59:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:31.296 00:59:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:12:31.296 00:59:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:12:31.296 00:59:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1221628 ']' 00:12:31.296 00:59:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1221628 00:12:31.296 00:59:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 1221628 ']' 00:12:31.296 00:59:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 1221628 00:12:31.296 00:59:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:12:31.296 00:59:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:31.296 00:59:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1221628 00:12:31.296 00:59:43 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:31.296 00:59:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:31.296 00:59:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1221628' 00:12:31.296 killing process with pid 1221628 00:12:31.296 00:59:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 1221628 00:12:31.296 00:59:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 1221628 00:12:31.552 00:59:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:31.552 00:59:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:31.552 00:59:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:31.552 00:59:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:31.552 00:59:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:31.552 00:59:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.552 00:59:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.552 00:59:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.457 00:59:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:33.457 00:12:33.457 real 0m43.146s 00:12:33.457 user 1m8.731s 00:12:33.457 sys 0m9.988s 00:12:33.457 00:59:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:33.457 00:59:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:33.457 ************************************ 00:12:33.457 END TEST nvmf_lvs_grow 00:12:33.457 ************************************ 00:12:33.716 00:59:45 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:33.716 00:59:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:33.716 00:59:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:33.716 00:59:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:33.716 ************************************ 00:12:33.716 START TEST nvmf_bdev_io_wait 00:12:33.716 ************************************ 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:33.716 * Looking for test storage... 
00:12:33.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:12:33.716 00:59:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:36.276 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:36.276 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:36.276 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:36.276 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:36.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:36.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:12:36.276 00:12:36.276 --- 10.0.0.2 ping statistics --- 00:12:36.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.276 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:36.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:36.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:12:36.276 00:12:36.276 --- 10.0.0.1 ping statistics --- 00:12:36.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.276 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:36.276 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1224584 00:12:36.277 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:36.277 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1224584 00:12:36.277 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 1224584 ']' 00:12:36.277 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.277 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:36.277 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.277 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:36.277 00:59:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:36.277 [2024-05-15 00:59:48.595673] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:12:36.277 [2024-05-15 00:59:48.595766] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.277 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.537 [2024-05-15 00:59:48.682337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:36.537 [2024-05-15 00:59:48.801047] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:36.537 [2024-05-15 00:59:48.801103] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:36.537 [2024-05-15 00:59:48.801129] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:36.537 [2024-05-15 00:59:48.801142] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:36.537 [2024-05-15 00:59:48.801154] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:36.537 [2024-05-15 00:59:48.801250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.537 [2024-05-15 00:59:48.801332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:36.537 [2024-05-15 00:59:48.801422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:36.537 [2024-05-15 00:59:48.801425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.474 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:37.474 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:12:37.474 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:37.474 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:37.474 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:37.474 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:37.474 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:37.474 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.474 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:37.474 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.474 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:37.474 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.474 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:37.474 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.474 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:37.474 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.474 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:37.475 [2024-05-15 00:59:49.682014] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.475 00:59:49 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:37.475 Malloc0 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:37.475 [2024-05-15 00:59:49.741207] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:37.475 [2024-05-15 00:59:49.741525] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1224835 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1224839 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:37.475 { 00:12:37.475 "params": { 00:12:37.475 "name": "Nvme$subsystem", 00:12:37.475 "trtype": "$TEST_TRANSPORT", 00:12:37.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:37.475 "adrfam": "ipv4", 00:12:37.475 "trsvcid": "$NVMF_PORT", 00:12:37.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:37.475 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:37.475 "hdgst": ${hdgst:-false}, 00:12:37.475 "ddgst": ${ddgst:-false} 00:12:37.475 }, 00:12:37.475 "method": 
"bdev_nvme_attach_controller" 00:12:37.475 } 00:12:37.475 EOF 00:12:37.475 )") 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1224842 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:37.475 { 00:12:37.475 "params": { 00:12:37.475 "name": "Nvme$subsystem", 00:12:37.475 "trtype": "$TEST_TRANSPORT", 00:12:37.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:37.475 "adrfam": "ipv4", 00:12:37.475 "trsvcid": "$NVMF_PORT", 00:12:37.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:37.475 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:37.475 "hdgst": ${hdgst:-false}, 00:12:37.475 "ddgst": ${ddgst:-false} 00:12:37.475 }, 00:12:37.475 "method": "bdev_nvme_attach_controller" 00:12:37.475 } 00:12:37.475 EOF 00:12:37.475 )") 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1224845 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:37.475 { 00:12:37.475 "params": { 00:12:37.475 "name": "Nvme$subsystem", 00:12:37.475 "trtype": "$TEST_TRANSPORT", 00:12:37.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:37.475 "adrfam": "ipv4", 00:12:37.475 "trsvcid": "$NVMF_PORT", 00:12:37.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:37.475 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:37.475 "hdgst": ${hdgst:-false}, 00:12:37.475 "ddgst": ${ddgst:-false} 00:12:37.475 }, 00:12:37.475 "method": "bdev_nvme_attach_controller" 00:12:37.475 } 00:12:37.475 EOF 00:12:37.475 )") 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@532 -- # local subsystem config 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:37.475 { 00:12:37.475 "params": { 00:12:37.475 "name": "Nvme$subsystem", 00:12:37.475 "trtype": "$TEST_TRANSPORT", 00:12:37.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:37.475 "adrfam": "ipv4", 00:12:37.475 "trsvcid": "$NVMF_PORT", 00:12:37.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:37.475 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:37.475 "hdgst": ${hdgst:-false}, 00:12:37.475 "ddgst": ${ddgst:-false} 00:12:37.475 }, 00:12:37.475 "method": "bdev_nvme_attach_controller" 00:12:37.475 } 00:12:37.475 EOF 00:12:37.475 )") 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1224835 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:37.475 "params": { 00:12:37.475 "name": "Nvme1", 00:12:37.475 "trtype": "tcp", 00:12:37.475 "traddr": "10.0.0.2", 00:12:37.475 "adrfam": "ipv4", 00:12:37.475 "trsvcid": "4420", 00:12:37.475 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:37.475 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:37.475 "hdgst": false, 00:12:37.475 "ddgst": false 00:12:37.475 }, 00:12:37.475 "method": "bdev_nvme_attach_controller" 00:12:37.475 }' 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
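Each of the four bdevperf processes launched above receives its bdev configuration as JSON on a file descriptor rather than from a file on disk: gen_nvmf_target_json (the nvmf/common.sh@532-@558 lines in this trace) assembles the bdev_nvme_attach_controller entry shown in the printf output, and bdev_io_wait.sh hands it over through process substitution, which is why every bdevperf command line ends in --json /dev/fd/63. A minimal sketch of that pattern, assuming the helper has already been sourced from this workspace's nvmf/common.sh (not a verbatim command from this run):

    # Sketch only: feed a generated JSON bdev config to bdevperf without a temp file.
    # The <(...) process substitution appears inside bdevperf as /dev/fd/63,
    # matching the traced command lines above. Run from the SPDK repository root.
    ./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
        --json <(gen_nvmf_target_json)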
00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:37.475 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:37.475 "params": { 00:12:37.475 "name": "Nvme1", 00:12:37.475 "trtype": "tcp", 00:12:37.475 "traddr": "10.0.0.2", 00:12:37.476 "adrfam": "ipv4", 00:12:37.476 "trsvcid": "4420", 00:12:37.476 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:37.476 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:37.476 "hdgst": false, 00:12:37.476 "ddgst": false 00:12:37.476 }, 00:12:37.476 "method": "bdev_nvme_attach_controller" 00:12:37.476 }' 00:12:37.476 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:37.476 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:37.476 "params": { 00:12:37.476 "name": "Nvme1", 00:12:37.476 "trtype": "tcp", 00:12:37.476 "traddr": "10.0.0.2", 00:12:37.476 "adrfam": "ipv4", 00:12:37.476 "trsvcid": "4420", 00:12:37.476 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:37.476 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:37.476 "hdgst": false, 00:12:37.476 "ddgst": false 00:12:37.476 }, 00:12:37.476 "method": "bdev_nvme_attach_controller" 00:12:37.476 }' 00:12:37.476 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:37.476 00:59:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:37.476 "params": { 00:12:37.476 "name": "Nvme1", 00:12:37.476 "trtype": "tcp", 00:12:37.476 "traddr": "10.0.0.2", 00:12:37.476 "adrfam": "ipv4", 00:12:37.476 "trsvcid": "4420", 00:12:37.476 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:37.476 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:37.476 "hdgst": false, 00:12:37.476 "ddgst": false 00:12:37.476 }, 00:12:37.476 "method": "bdev_nvme_attach_controller" 00:12:37.476 }' 00:12:37.476 [2024-05-15 00:59:49.785607] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:12:37.476 [2024-05-15 00:59:49.785606] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:12:37.476 [2024-05-15 00:59:49.785607] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:12:37.476 [2024-05-15 00:59:49.785696] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-05-15 00:59:49.785696] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-05-15 00:59:49.785696] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:37.476 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:37.476 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:37.476 [2024-05-15 00:59:49.785855] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:12:37.476 [2024-05-15 00:59:49.785919] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:37.476 EAL: No free 2048 kB hugepages reported on node 1 00:12:37.744 EAL: No free 2048 kB hugepages reported on node 1 00:12:37.744 [2024-05-15 00:59:49.969280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.744 EAL: No free 2048 kB hugepages reported on node 1 00:12:37.744 [2024-05-15 00:59:50.068771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.744 [2024-05-15 00:59:50.068783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:38.010 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.010 [2024-05-15 00:59:50.167769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:12:38.010 [2024-05-15 00:59:50.170019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.010 [2024-05-15 00:59:50.270128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:12:38.010 [2024-05-15 00:59:50.273123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.010 [2024-05-15 00:59:50.376464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:12:38.269 Running I/O for 1 seconds... 00:12:38.269 Running I/O for 1 seconds... 00:12:38.269 Running I/O for 1 seconds... 00:12:38.269 Running I/O for 1 seconds... 00:12:39.207 00:12:39.207 Latency(us) 00:12:39.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:39.207 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:39.207 Nvme1n1 : 1.00 194646.02 760.34 0.00 0.00 655.03 274.58 879.88 00:12:39.207 =================================================================================================================== 00:12:39.207 Total : 194646.02 760.34 0.00 0.00 655.03 274.58 879.88 00:12:39.207 00:12:39.207 Latency(us) 00:12:39.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:39.207 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:39.207 Nvme1n1 : 1.02 6893.09 26.93 0.00 0.00 18434.58 7524.50 31457.28 00:12:39.207 =================================================================================================================== 00:12:39.207 Total : 6893.09 26.93 0.00 0.00 18434.58 7524.50 31457.28 00:12:39.207 00:12:39.207 Latency(us) 00:12:39.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:39.207 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:39.207 Nvme1n1 : 1.01 8640.27 33.75 0.00 0.00 14722.01 3106.89 45438.29 00:12:39.207 =================================================================================================================== 00:12:39.207 Total : 8640.27 33.75 0.00 0.00 14722.01 3106.89 45438.29 00:12:39.467 00:12:39.468 Latency(us) 00:12:39.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:39.468 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:39.468 Nvme1n1 : 1.01 6744.78 26.35 0.00 0.00 18905.17 3422.44 43108.12 00:12:39.468 =================================================================================================================== 00:12:39.468 Total : 6744.78 26.35 0.00 0.00 18905.17 3422.44 43108.12 00:12:39.728 00:59:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 
1224839 00:12:39.728 00:59:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1224842 00:12:39.728 00:59:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1224845 00:12:39.728 00:59:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:39.728 00:59:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.728 00:59:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:39.728 00:59:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.728 00:59:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:39.728 00:59:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:39.728 00:59:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:39.728 00:59:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:12:39.728 00:59:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:39.728 00:59:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:12:39.728 00:59:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:39.728 00:59:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:39.728 rmmod nvme_tcp 00:12:39.728 rmmod nvme_fabrics 00:12:39.728 rmmod nvme_keyring 00:12:39.728 00:59:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:39.728 00:59:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:12:39.728 00:59:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:12:39.728 00:59:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1224584 ']' 00:12:39.728 00:59:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1224584 00:12:39.728 00:59:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 1224584 ']' 00:12:39.728 00:59:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 1224584 00:12:39.728 00:59:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:12:39.728 00:59:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:39.728 00:59:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1224584 00:12:39.728 00:59:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:39.728 00:59:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:39.728 00:59:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1224584' 00:12:39.728 killing process with pid 1224584 00:12:39.728 00:59:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 1224584 00:12:39.728 [2024-05-15 00:59:52.053142] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:39.728 00:59:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 1224584 00:12:39.988 00:59:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:39.988 00:59:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:39.988 00:59:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 
-- # nvmf_tcp_fini 00:12:39.988 00:59:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:39.988 00:59:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:39.988 00:59:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.988 00:59:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:39.988 00:59:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.523 00:59:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:42.523 00:12:42.523 real 0m8.486s 00:12:42.523 user 0m19.647s 00:12:42.523 sys 0m3.869s 00:12:42.524 00:59:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:42.524 00:59:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:42.524 ************************************ 00:12:42.524 END TEST nvmf_bdev_io_wait 00:12:42.524 ************************************ 00:12:42.524 00:59:54 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:42.524 00:59:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:42.524 00:59:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:42.524 00:59:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:42.524 ************************************ 00:12:42.524 START TEST nvmf_queue_depth 00:12:42.524 ************************************ 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:42.524 * Looking for test storage... 
00:12:42.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:12:42.524 00:59:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:44.431 
00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:44.431 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:44.431 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:44.431 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:44.431 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:44.431 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:44.432 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:44.432 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:44.432 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:44.432 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:44.432 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:44.432 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:44.432 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:44.432 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:44.432 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:44.432 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:44.432 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:44.690 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:44.690 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:44.690 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:44.690 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:44.690 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:44.690 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:44.691 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:44.691 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:44.691 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:44.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:44.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:12:44.691 00:12:44.691 --- 10.0.0.2 ping statistics --- 00:12:44.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.691 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:12:44.691 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:44.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:44.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:12:44.691 00:12:44.691 --- 10.0.0.1 ping statistics --- 00:12:44.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.691 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:12:44.691 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:44.691 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:12:44.691 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:44.691 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:44.691 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:44.691 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:44.691 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:44.691 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:44.691 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:44.691 00:59:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:44.691 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:44.691 00:59:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:44.691 00:59:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:44.691 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1227377 00:12:44.691 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:44.691 00:59:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1227377 00:12:44.691 00:59:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 1227377 ']' 00:12:44.691 00:59:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.691 00:59:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:44.691 00:59:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.691 00:59:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:44.691 00:59:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:44.691 [2024-05-15 00:59:56.997978] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:12:44.691 [2024-05-15 00:59:56.998062] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:44.691 EAL: No free 2048 kB hugepages reported on node 1 00:12:44.691 [2024-05-15 00:59:57.073075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.949 [2024-05-15 00:59:57.179411] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:44.950 [2024-05-15 00:59:57.179466] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:44.950 [2024-05-15 00:59:57.179490] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:44.950 [2024-05-15 00:59:57.179501] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:44.950 [2024-05-15 00:59:57.179510] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:44.950 [2024-05-15 00:59:57.179536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.950 00:59:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:44.950 00:59:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:12:44.950 00:59:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:44.950 00:59:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:44.950 00:59:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:44.950 00:59:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:44.950 00:59:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:44.950 00:59:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.950 00:59:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:44.950 [2024-05-15 00:59:57.327686] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:44.950 00:59:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.950 00:59:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:44.950 00:59:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.950 00:59:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:45.208 Malloc0 00:12:45.208 00:59:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.208 00:59:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:45.208 00:59:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.208 00:59:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:45.208 00:59:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.208 00:59:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:45.208 00:59:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.208 00:59:57 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:45.208 00:59:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.208 00:59:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.208 00:59:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.208 00:59:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:45.208 [2024-05-15 00:59:57.394336] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:45.208 [2024-05-15 00:59:57.394645] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.208 00:59:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.208 00:59:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1227397 00:12:45.208 00:59:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:45.208 00:59:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:45.208 00:59:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1227397 /var/tmp/bdevperf.sock 00:12:45.208 00:59:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 1227397 ']' 00:12:45.208 00:59:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:45.208 00:59:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:45.208 00:59:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:45.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:45.208 00:59:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:45.208 00:59:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:45.208 [2024-05-15 00:59:57.439837] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:12:45.208 [2024-05-15 00:59:57.439900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1227397 ] 00:12:45.208 EAL: No free 2048 kB hugepages reported on node 1 00:12:45.208 [2024-05-15 00:59:57.513580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.465 [2024-05-15 00:59:57.630369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.033 00:59:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:46.033 00:59:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:12:46.033 00:59:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:46.033 00:59:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.033 00:59:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:46.293 NVMe0n1 00:12:46.293 00:59:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.293 00:59:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:46.293 Running I/O for 10 seconds... 00:12:56.269 00:12:56.269 Latency(us) 00:12:56.269 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:56.269 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:56.269 Verification LBA range: start 0x0 length 0x4000 00:12:56.269 NVMe0n1 : 10.06 8459.66 33.05 0.00 0.00 120504.84 9320.68 78060.66 00:12:56.269 =================================================================================================================== 00:12:56.269 Total : 8459.66 33.05 0.00 0.00 120504.84 9320.68 78060.66 00:12:56.269 0 00:12:56.529 01:00:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1227397 00:12:56.529 01:00:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 1227397 ']' 00:12:56.529 01:00:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 1227397 00:12:56.529 01:00:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:12:56.529 01:00:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:56.529 01:00:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1227397 00:12:56.529 01:00:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:56.529 01:00:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:56.529 01:00:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1227397' 00:12:56.529 killing process with pid 1227397 00:12:56.529 01:00:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 1227397 00:12:56.529 Received shutdown signal, test time was about 10.000000 seconds 00:12:56.529 00:12:56.529 Latency(us) 00:12:56.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:56.530 =================================================================================================================== 00:12:56.530 Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:56.530 01:00:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 1227397 00:12:56.790 01:00:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:56.790 01:00:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:56.790 01:00:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:56.790 01:00:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:12:56.790 01:00:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:56.790 01:00:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:12:56.790 01:00:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:56.790 01:00:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:56.790 rmmod nvme_tcp 00:12:56.790 rmmod nvme_fabrics 00:12:56.790 rmmod nvme_keyring 00:12:56.790 01:00:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:56.790 01:00:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:12:56.790 01:00:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:12:56.790 01:00:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1227377 ']' 00:12:56.790 01:00:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1227377 00:12:56.790 01:00:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 1227377 ']' 00:12:56.790 01:00:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 1227377 00:12:56.790 01:00:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:12:56.790 01:00:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:56.790 01:00:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1227377 00:12:56.790 01:00:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:56.790 01:00:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:56.790 01:00:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1227377' 00:12:56.790 killing process with pid 1227377 00:12:56.790 01:00:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 1227377 00:12:56.790 [2024-05-15 01:00:09.035031] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:56.790 01:00:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 1227377 00:12:57.047 01:00:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:57.047 01:00:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:57.047 01:00:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:57.047 01:00:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:57.048 01:00:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:57.048 01:00:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.048 01:00:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:57.048 01:00:09 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.596 01:00:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:59.596 00:12:59.596 real 0m16.947s 00:12:59.596 user 0m23.989s 00:12:59.596 sys 0m3.313s 00:12:59.596 01:00:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:59.596 01:00:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:59.596 ************************************ 00:12:59.596 END TEST nvmf_queue_depth 00:12:59.596 ************************************ 00:12:59.596 01:00:11 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:59.596 01:00:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:59.596 01:00:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:59.596 01:00:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:59.596 ************************************ 00:12:59.596 START TEST nvmf_target_multipath 00:12:59.596 ************************************ 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:59.596 * Looking for test storage... 00:12:59.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:12:59.596 01:00:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:02.132 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:02.132 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:13:02.132 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:02.132 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:02.132 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:02.132 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:02.132 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:02.132 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:13:02.132 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:02.132 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:13:02.132 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:13:02.132 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:13:02.132 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:13:02.132 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:13:02.132 01:00:13 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:13:02.132 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:02.132 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:02.132 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:02.132 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:02.132 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:02.132 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:02.132 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:02.133 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:02.133 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:02.133 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:02.133 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:02.133 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:02.133 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:02.133 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:02.133 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:02.133 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:02.133 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:02.133 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:02.133 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:02.133 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:02.133 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:02.133 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:02.133 01:00:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:02.133 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.133 01:00:14 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:02.133 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:02.133 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:02.133 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:02.133 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:13:02.133 00:13:02.133 --- 10.0.0.2 ping statistics --- 00:13:02.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.133 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:02.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:02.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:13:02.133 00:13:02.133 --- 10.0.0.1 ping statistics --- 00:13:02.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.133 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:13:02.133 only one NIC for nvmf test 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:02.133 rmmod nvme_tcp 00:13:02.133 rmmod nvme_fabrics 00:13:02.133 rmmod nvme_keyring 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:02.133 01:00:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.042 01:00:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:13:04.042 01:00:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:13:04.042 01:00:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:13:04.042 01:00:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:04.042 01:00:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:13:04.042 01:00:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:04.042 01:00:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:13:04.042 01:00:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:04.042 01:00:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:04.042 01:00:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:04.042 01:00:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:13:04.042 01:00:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:13:04.042 01:00:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:04.042 01:00:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:04.042 01:00:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:04.042 01:00:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:04.042 01:00:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:04.042 01:00:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:04.042 01:00:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.042 01:00:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:04.042 01:00:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.042 01:00:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:04.042 00:13:04.042 real 0m4.851s 00:13:04.042 user 0m1.005s 00:13:04.042 sys 0m1.859s 00:13:04.042 01:00:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:04.042 01:00:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:04.042 ************************************ 00:13:04.042 END TEST nvmf_target_multipath 00:13:04.042 ************************************ 00:13:04.042 01:00:16 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:04.042 01:00:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:04.042 01:00:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:04.042 01:00:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:04.042 ************************************ 00:13:04.042 START TEST nvmf_zcopy 00:13:04.042 ************************************ 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:04.042 * Looking for test storage... 
00:13:04.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:04.042 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:04.043 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.043 01:00:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:04.043 01:00:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.043 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:04.043 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:04.043 01:00:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:13:04.043 01:00:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:06.595 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:06.595 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:13:06.595 01:00:18 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:13:06.595 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:06.596 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:06.596 
01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:06.596 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:06.596 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:06.596 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:06.596 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:06.596 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:13:06.596 00:13:06.596 --- 10.0.0.2 ping statistics --- 00:13:06.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.596 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:06.596 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:06.596 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:13:06.596 00:13:06.596 --- 10.0.0.1 ping statistics --- 00:13:06.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.596 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1233901 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1233901 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 1233901 ']' 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:06.596 01:00:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:06.596 [2024-05-15 01:00:18.933228] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:13:06.596 [2024-05-15 01:00:18.933329] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.596 EAL: No free 2048 kB hugepages reported on node 1 00:13:06.855 [2024-05-15 01:00:19.009663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.855 [2024-05-15 01:00:19.115416] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:06.855 [2024-05-15 01:00:19.115464] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:06.855 [2024-05-15 01:00:19.115487] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:06.855 [2024-05-15 01:00:19.115498] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:06.855 [2024-05-15 01:00:19.115507] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:06.855 [2024-05-15 01:00:19.115532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:07.800 01:00:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:07.800 01:00:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:13:07.800 01:00:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:07.800 01:00:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:07.800 01:00:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:07.800 01:00:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:07.800 01:00:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:13:07.800 01:00:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:13:07.800 01:00:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.800 01:00:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:07.800 [2024-05-15 01:00:19.948677] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:07.800 01:00:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.800 01:00:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:07.800 01:00:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.800 01:00:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:07.800 01:00:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.800 01:00:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.800 01:00:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.800 01:00:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:07.800 [2024-05-15 01:00:19.964645] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:07.800 [2024-05-15 01:00:19.964904] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.800 01:00:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.800 01:00:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:07.800 01:00:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.800 01:00:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:07.800 01:00:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.800 01:00:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:13:07.800 01:00:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:13:07.800 01:00:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:07.800 malloc0 00:13:07.800 01:00:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.800 01:00:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:07.800 01:00:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.800 01:00:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:07.800 01:00:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.800 01:00:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:13:07.800 01:00:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:13:07.800 01:00:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:13:07.800 01:00:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:13:07.800 01:00:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:07.800 01:00:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:07.800 { 00:13:07.800 "params": { 00:13:07.800 "name": "Nvme$subsystem", 00:13:07.800 "trtype": "$TEST_TRANSPORT", 00:13:07.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:07.800 "adrfam": "ipv4", 00:13:07.800 "trsvcid": "$NVMF_PORT", 00:13:07.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:07.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:07.800 "hdgst": ${hdgst:-false}, 00:13:07.800 "ddgst": ${ddgst:-false} 00:13:07.800 }, 00:13:07.800 "method": "bdev_nvme_attach_controller" 00:13:07.800 } 00:13:07.800 EOF 00:13:07.800 )") 00:13:07.800 01:00:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:13:07.800 01:00:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:13:07.800 01:00:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:13:07.800 01:00:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:07.800 "params": { 00:13:07.800 "name": "Nvme1", 00:13:07.800 "trtype": "tcp", 00:13:07.800 "traddr": "10.0.0.2", 00:13:07.800 "adrfam": "ipv4", 00:13:07.800 "trsvcid": "4420", 00:13:07.800 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:07.800 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:07.800 "hdgst": false, 00:13:07.800 "ddgst": false 00:13:07.800 }, 00:13:07.800 "method": "bdev_nvme_attach_controller" 00:13:07.800 }' 00:13:07.800 [2024-05-15 01:00:20.047420] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:13:07.800 [2024-05-15 01:00:20.047517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1234056 ] 00:13:07.800 EAL: No free 2048 kB hugepages reported on node 1 00:13:07.800 [2024-05-15 01:00:20.124139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.059 [2024-05-15 01:00:20.246846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.318 Running I/O for 10 seconds... 
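Note on the target-side setup traced above: each rpc_cmd step that zcopy.sh runs is effectively a plain scripts/rpc.py call against the RPC socket that waitforlisten polls (/var/tmp/spdk.sock in this run). A minimal shell sketch of the same sequence, reusing only the commands, flags, addresses and names visible in this log; the comments and the backgrounded launch line are illustrative, not part of the test script:

# target launch as traced above (network namespace cvl_0_0_ns_spdk, core mask 0x2); path shortened here
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

# TCP transport with zero-copy enabled
scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy

# subsystem with up to 10 namespaces, plus data and discovery listeners on 10.0.0.2:4420
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# 32 MB malloc bdev with 4096-byte blocks, exposed as namespace 1
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1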
00:13:18.306 00:13:18.306 Latency(us) 00:13:18.306 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:18.306 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:13:18.306 Verification LBA range: start 0x0 length 0x1000 00:13:18.306 Nvme1n1 : 10.01 5914.49 46.21 0.00 0.00 21581.84 958.77 44661.57 00:13:18.306 =================================================================================================================== 00:13:18.306 Total : 5914.49 46.21 0.00 0.00 21581.84 958.77 44661.57 00:13:18.563 01:00:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1235365 00:13:18.563 01:00:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:13:18.563 01:00:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:18.563 01:00:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:13:18.563 01:00:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:13:18.563 01:00:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:13:18.564 01:00:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:13:18.564 01:00:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:18.564 01:00:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:18.564 { 00:13:18.564 "params": { 00:13:18.564 "name": "Nvme$subsystem", 00:13:18.564 "trtype": "$TEST_TRANSPORT", 00:13:18.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:18.564 "adrfam": "ipv4", 00:13:18.564 "trsvcid": "$NVMF_PORT", 00:13:18.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:18.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:18.564 "hdgst": ${hdgst:-false}, 00:13:18.564 "ddgst": ${ddgst:-false} 00:13:18.564 }, 00:13:18.564 "method": "bdev_nvme_attach_controller" 00:13:18.564 } 00:13:18.564 EOF 00:13:18.564 )") 00:13:18.564 [2024-05-15 01:00:30.914915] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.564 [2024-05-15 01:00:30.914985] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.564 01:00:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:13:18.564 01:00:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
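Both bdevperf passes (the 10-second verify run summarized above and the 5-second randrw run being started here) read their bdev configuration from a file descriptor filled by gen_nvmf_target_json; the rendered per-controller fragment for this pass is printed just below. As a rough standalone sketch, the assembled file plausibly looks like the following, where the outer "subsystems"/"bdev" wrapper and the nvme.json filename are assumptions (only the inner bdev_nvme_attach_controller fragment is traced in this log):

cat > nvme.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# same workload parameters as the pass launched above
./build/examples/bdevperf --json nvme.json -t 5 -q 128 -w randrw -M 50 -o 8192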
00:13:18.564 01:00:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:13:18.564 01:00:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:18.564 "params": { 00:13:18.564 "name": "Nvme1", 00:13:18.564 "trtype": "tcp", 00:13:18.564 "traddr": "10.0.0.2", 00:13:18.564 "adrfam": "ipv4", 00:13:18.564 "trsvcid": "4420", 00:13:18.564 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:18.564 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:18.564 "hdgst": false, 00:13:18.564 "ddgst": false 00:13:18.564 }, 00:13:18.564 "method": "bdev_nvme_attach_controller" 00:13:18.564 }' 00:13:18.564 [2024-05-15 01:00:30.922880] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.564 [2024-05-15 01:00:30.922908] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.564 [2024-05-15 01:00:30.930895] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.564 [2024-05-15 01:00:30.930918] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.564 [2024-05-15 01:00:30.938910] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.564 [2024-05-15 01:00:30.938940] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.564 [2024-05-15 01:00:30.946928] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.564 [2024-05-15 01:00:30.946976] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.564 [2024-05-15 01:00:30.952611] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:13:18.564 [2024-05-15 01:00:30.952674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1235365 ] 00:13:18.824 [2024-05-15 01:00:30.954985] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.824 [2024-05-15 01:00:30.955008] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.824 [2024-05-15 01:00:30.962995] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.824 [2024-05-15 01:00:30.963017] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.824 [2024-05-15 01:00:30.971007] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.824 [2024-05-15 01:00:30.971029] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.824 [2024-05-15 01:00:30.979023] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.824 [2024-05-15 01:00:30.979045] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.824 [2024-05-15 01:00:30.987045] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.824 [2024-05-15 01:00:30.987067] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.824 [2024-05-15 01:00:30.995083] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.824 [2024-05-15 01:00:30.995105] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.824 [2024-05-15 01:00:31.003106] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:13:18.824 [2024-05-15 01:00:31.003128] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.824 EAL: No free 2048 kB hugepages reported on node 1 00:13:18.824 [2024-05-15 01:00:31.011131] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.824 [2024-05-15 01:00:31.011154] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.824 [2024-05-15 01:00:31.019156] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.824 [2024-05-15 01:00:31.019180] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.824 [2024-05-15 01:00:31.027175] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.824 [2024-05-15 01:00:31.027197] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.824 [2024-05-15 01:00:31.035196] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.824 [2024-05-15 01:00:31.035237] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.824 [2024-05-15 01:00:31.042187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.824 [2024-05-15 01:00:31.043219] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.824 [2024-05-15 01:00:31.043242] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.824 [2024-05-15 01:00:31.051312] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.824 [2024-05-15 01:00:31.051345] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.824 [2024-05-15 01:00:31.059317] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.824 [2024-05-15 01:00:31.059349] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.824 [2024-05-15 01:00:31.067309] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.824 [2024-05-15 01:00:31.067330] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.824 [2024-05-15 01:00:31.075328] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.824 [2024-05-15 01:00:31.075349] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.824 [2024-05-15 01:00:31.083350] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.824 [2024-05-15 01:00:31.083371] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.824 [2024-05-15 01:00:31.091402] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.824 [2024-05-15 01:00:31.091423] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.824 [2024-05-15 01:00:31.099388] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.824 [2024-05-15 01:00:31.099408] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.824 [2024-05-15 01:00:31.107430] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.824 [2024-05-15 01:00:31.107463] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.824 [2024-05-15 01:00:31.115463] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:13:18.824 [2024-05-15 01:00:31.115509] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.824 [2024-05-15 01:00:31.123452] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.824 [2024-05-15 01:00:31.123474] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.824 [2024-05-15 01:00:31.131473] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.824 [2024-05-15 01:00:31.131494] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.824 [2024-05-15 01:00:31.139494] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.824 [2024-05-15 01:00:31.139515] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.824 [2024-05-15 01:00:31.147517] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.824 [2024-05-15 01:00:31.147538] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.824 [2024-05-15 01:00:31.155539] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.824 [2024-05-15 01:00:31.155560] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.824 [2024-05-15 01:00:31.161347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.824 [2024-05-15 01:00:31.163560] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.824 [2024-05-15 01:00:31.163581] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.824 [2024-05-15 01:00:31.171581] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.824 [2024-05-15 01:00:31.171601] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.824 [2024-05-15 01:00:31.179643] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.824 [2024-05-15 01:00:31.179676] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.824 [2024-05-15 01:00:31.187658] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.824 [2024-05-15 01:00:31.187696] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.824 [2024-05-15 01:00:31.195682] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.824 [2024-05-15 01:00:31.195717] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.824 [2024-05-15 01:00:31.203708] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.825 [2024-05-15 01:00:31.203744] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.825 [2024-05-15 01:00:31.211728] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.825 [2024-05-15 01:00:31.211765] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.083 [2024-05-15 01:00:31.219749] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.083 [2024-05-15 01:00:31.219784] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.083 [2024-05-15 01:00:31.227760] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.083 [2024-05-15 
01:00:31.227805] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.083 [2024-05-15 01:00:31.235761] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.083 [2024-05-15 01:00:31.235785] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.083 [2024-05-15 01:00:31.243814] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.083 [2024-05-15 01:00:31.243848] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.083 [2024-05-15 01:00:31.251838] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.083 [2024-05-15 01:00:31.251874] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.083 [2024-05-15 01:00:31.259825] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.083 [2024-05-15 01:00:31.259846] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.083 [2024-05-15 01:00:31.267845] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.083 [2024-05-15 01:00:31.267865] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.083 [2024-05-15 01:00:31.276042] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.083 [2024-05-15 01:00:31.276068] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.083 [2024-05-15 01:00:31.284027] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.083 [2024-05-15 01:00:31.284053] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.083 [2024-05-15 01:00:31.292015] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.083 [2024-05-15 01:00:31.292039] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.083 [2024-05-15 01:00:31.300046] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.083 [2024-05-15 01:00:31.300070] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.083 [2024-05-15 01:00:31.308049] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.083 [2024-05-15 01:00:31.308073] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.083 [2024-05-15 01:00:31.316070] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.083 [2024-05-15 01:00:31.316093] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.083 [2024-05-15 01:00:31.324093] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.083 [2024-05-15 01:00:31.324116] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.083 [2024-05-15 01:00:31.332118] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.083 [2024-05-15 01:00:31.332141] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.084 [2024-05-15 01:00:31.340141] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.084 [2024-05-15 01:00:31.340163] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.084 [2024-05-15 01:00:31.348260] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.084 [2024-05-15 01:00:31.348284] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.084 [2024-05-15 01:00:31.356190] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.084 [2024-05-15 01:00:31.356230] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.084 [2024-05-15 01:00:31.364233] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.084 [2024-05-15 01:00:31.364257] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.084 [2024-05-15 01:00:31.372237] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.084 [2024-05-15 01:00:31.372278] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.084 [2024-05-15 01:00:31.380295] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.084 [2024-05-15 01:00:31.380325] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.084 [2024-05-15 01:00:31.388314] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.084 [2024-05-15 01:00:31.388339] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.084 Running I/O for 5 seconds... 00:13:19.084 [2024-05-15 01:00:31.396407] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.084 [2024-05-15 01:00:31.396433] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.084 [2024-05-15 01:00:31.410211] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.084 [2024-05-15 01:00:31.410240] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.084 [2024-05-15 01:00:31.421844] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.084 [2024-05-15 01:00:31.421873] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.084 [2024-05-15 01:00:31.433781] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.084 [2024-05-15 01:00:31.433810] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.084 [2024-05-15 01:00:31.445050] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.084 [2024-05-15 01:00:31.445078] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.084 [2024-05-15 01:00:31.456559] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.084 [2024-05-15 01:00:31.456586] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.084 [2024-05-15 01:00:31.467983] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.084 [2024-05-15 01:00:31.468011] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.342 [2024-05-15 01:00:31.479427] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.342 [2024-05-15 01:00:31.479456] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.342 [2024-05-15 01:00:31.491343] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.342 
[2024-05-15 01:00:31.491371] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.342 [2024-05-15 01:00:31.503000] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.342 [2024-05-15 01:00:31.503029] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.342 [2024-05-15 01:00:31.515108] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.342 [2024-05-15 01:00:31.515137] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.342 [2024-05-15 01:00:31.527770] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.342 [2024-05-15 01:00:31.527800] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.342 [2024-05-15 01:00:31.540188] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.342 [2024-05-15 01:00:31.540237] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.342 [2024-05-15 01:00:31.552651] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.342 [2024-05-15 01:00:31.552679] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.342 [2024-05-15 01:00:31.564576] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.342 [2024-05-15 01:00:31.564604] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.342 [2024-05-15 01:00:31.576234] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.342 [2024-05-15 01:00:31.576263] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.342 [2024-05-15 01:00:31.587909] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.342 [2024-05-15 01:00:31.587948] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.342 [2024-05-15 01:00:31.599721] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.342 [2024-05-15 01:00:31.599762] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.342 [2024-05-15 01:00:31.610913] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.342 [2024-05-15 01:00:31.610948] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.342 [2024-05-15 01:00:31.622130] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.342 [2024-05-15 01:00:31.622159] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.342 [2024-05-15 01:00:31.633738] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.342 [2024-05-15 01:00:31.633773] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.342 [2024-05-15 01:00:31.645228] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.342 [2024-05-15 01:00:31.645255] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.342 [2024-05-15 01:00:31.657116] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.342 [2024-05-15 01:00:31.657144] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.342 [2024-05-15 01:00:31.669144] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.342 [2024-05-15 01:00:31.669173] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.342 [2024-05-15 01:00:31.680646] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.342 [2024-05-15 01:00:31.680673] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.342 [2024-05-15 01:00:31.692270] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.342 [2024-05-15 01:00:31.692298] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.342 [2024-05-15 01:00:31.703867] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.342 [2024-05-15 01:00:31.703895] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.342 [2024-05-15 01:00:31.715604] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.342 [2024-05-15 01:00:31.715632] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.342 [2024-05-15 01:00:31.727401] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.342 [2024-05-15 01:00:31.727429] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.602 [2024-05-15 01:00:31.738878] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.602 [2024-05-15 01:00:31.738906] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.602 [2024-05-15 01:00:31.750572] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.602 [2024-05-15 01:00:31.750600] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.602 [2024-05-15 01:00:31.762083] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.602 [2024-05-15 01:00:31.762112] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.602 [2024-05-15 01:00:31.773555] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.602 [2024-05-15 01:00:31.773594] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.602 [2024-05-15 01:00:31.785224] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.602 [2024-05-15 01:00:31.785265] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.602 [2024-05-15 01:00:31.797210] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.602 [2024-05-15 01:00:31.797238] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.602 [2024-05-15 01:00:31.809105] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.602 [2024-05-15 01:00:31.809133] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.602 [2024-05-15 01:00:31.820905] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.602 [2024-05-15 01:00:31.820958] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.602 [2024-05-15 01:00:31.832993] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.602 [2024-05-15 01:00:31.833023] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.602 [2024-05-15 01:00:31.844650] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.602 [2024-05-15 01:00:31.844678] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.602 [2024-05-15 01:00:31.856695] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.602 [2024-05-15 01:00:31.856722] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.602 [2024-05-15 01:00:31.869310] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.602 [2024-05-15 01:00:31.869337] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.602 [2024-05-15 01:00:31.881707] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.602 [2024-05-15 01:00:31.881733] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.602 [2024-05-15 01:00:31.893442] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.602 [2024-05-15 01:00:31.893469] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.602 [2024-05-15 01:00:31.905625] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.602 [2024-05-15 01:00:31.905661] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.602 [2024-05-15 01:00:31.917474] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.602 [2024-05-15 01:00:31.917501] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.603 [2024-05-15 01:00:31.929217] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.603 [2024-05-15 01:00:31.929259] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.603 [2024-05-15 01:00:31.940504] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.603 [2024-05-15 01:00:31.940531] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.603 [2024-05-15 01:00:31.952527] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.603 [2024-05-15 01:00:31.952577] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.603 [2024-05-15 01:00:31.964454] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.603 [2024-05-15 01:00:31.964480] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.603 [2024-05-15 01:00:31.976025] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.603 [2024-05-15 01:00:31.976064] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.603 [2024-05-15 01:00:31.989448] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.603 [2024-05-15 01:00:31.989475] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.862 [2024-05-15 01:00:32.000071] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.862 [2024-05-15 01:00:32.000099] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.862 [2024-05-15 01:00:32.012848] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.862 [2024-05-15 01:00:32.012877] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.862 [2024-05-15 01:00:32.025088] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.862 [2024-05-15 01:00:32.025116] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.862 [2024-05-15 01:00:32.036982] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.862 [2024-05-15 01:00:32.037010] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.862 [2024-05-15 01:00:32.050573] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.862 [2024-05-15 01:00:32.050600] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.862 [2024-05-15 01:00:32.061297] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.862 [2024-05-15 01:00:32.061325] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.862 [2024-05-15 01:00:32.073631] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.862 [2024-05-15 01:00:32.073658] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.862 [2024-05-15 01:00:32.085394] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.862 [2024-05-15 01:00:32.085423] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.862 [2024-05-15 01:00:32.097035] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.862 [2024-05-15 01:00:32.097064] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.862 [2024-05-15 01:00:32.108351] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.862 [2024-05-15 01:00:32.108379] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.862 [2024-05-15 01:00:32.119853] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.862 [2024-05-15 01:00:32.119881] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.862 [2024-05-15 01:00:32.131656] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.862 [2024-05-15 01:00:32.131701] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.862 [2024-05-15 01:00:32.143796] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.862 [2024-05-15 01:00:32.143823] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.862 [2024-05-15 01:00:32.155827] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.862 [2024-05-15 01:00:32.155854] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.862 [2024-05-15 01:00:32.168231] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.862 [2024-05-15 01:00:32.168284] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.862 [2024-05-15 01:00:32.180666] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.862 [2024-05-15 01:00:32.180694] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.862 [2024-05-15 01:00:32.192463] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.862 [2024-05-15 01:00:32.192490] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.862 [2024-05-15 01:00:32.203524] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.863 [2024-05-15 01:00:32.203551] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.863 [2024-05-15 01:00:32.215524] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.863 [2024-05-15 01:00:32.215551] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.863 [2024-05-15 01:00:32.227874] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.863 [2024-05-15 01:00:32.227901] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.863 [2024-05-15 01:00:32.240062] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.863 [2024-05-15 01:00:32.240090] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:19.863 [2024-05-15 01:00:32.251672] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:19.863 [2024-05-15 01:00:32.251699] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.121 [2024-05-15 01:00:32.263879] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.122 [2024-05-15 01:00:32.263907] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.122 [2024-05-15 01:00:32.275774] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.122 [2024-05-15 01:00:32.275816] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.122 [2024-05-15 01:00:32.287592] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.122 [2024-05-15 01:00:32.287619] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.122 [2024-05-15 01:00:32.299122] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.122 [2024-05-15 01:00:32.299150] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.122 [2024-05-15 01:00:32.310648] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.122 [2024-05-15 01:00:32.310675] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.122 [2024-05-15 01:00:32.322295] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.122 [2024-05-15 01:00:32.322322] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.122 [2024-05-15 01:00:32.333970] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.122 [2024-05-15 01:00:32.333999] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.122 [2024-05-15 01:00:32.345614] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.122 [2024-05-15 01:00:32.345642] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.122 [2024-05-15 01:00:32.357038] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.122 [2024-05-15 01:00:32.357076] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.122 [2024-05-15 01:00:32.368786] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.122 [2024-05-15 01:00:32.368814] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.122 [2024-05-15 01:00:32.380554] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.122 [2024-05-15 01:00:32.380583] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.122 [2024-05-15 01:00:32.392742] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.122 [2024-05-15 01:00:32.392769] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.122 [2024-05-15 01:00:32.404450] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.122 [2024-05-15 01:00:32.404479] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.122 [2024-05-15 01:00:32.416112] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.122 [2024-05-15 01:00:32.416140] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.122 [2024-05-15 01:00:32.428125] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.122 [2024-05-15 01:00:32.428152] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.122 [2024-05-15 01:00:32.439847] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.122 [2024-05-15 01:00:32.439875] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.122 [2024-05-15 01:00:32.451598] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.122 [2024-05-15 01:00:32.451625] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.122 [2024-05-15 01:00:32.463167] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.122 [2024-05-15 01:00:32.463195] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.122 [2024-05-15 01:00:32.475177] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.122 [2024-05-15 01:00:32.475204] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.122 [2024-05-15 01:00:32.487163] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.122 [2024-05-15 01:00:32.487199] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.122 [2024-05-15 01:00:32.498803] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.122 [2024-05-15 01:00:32.498830] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.122 [2024-05-15 01:00:32.510509] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.122 [2024-05-15 01:00:32.510561] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.380 [2024-05-15 01:00:32.521813] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.380 [2024-05-15 01:00:32.521841] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.380 [2024-05-15 01:00:32.533584] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.380 [2024-05-15 01:00:32.533611] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.380 [2024-05-15 01:00:32.544807] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.380 [2024-05-15 01:00:32.544835] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.380 [2024-05-15 01:00:32.556456] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.380 [2024-05-15 01:00:32.556483] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.380 [2024-05-15 01:00:32.568224] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.380 [2024-05-15 01:00:32.568266] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.380 [2024-05-15 01:00:32.580375] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.380 [2024-05-15 01:00:32.580401] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.380 [2024-05-15 01:00:32.592660] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.380 [2024-05-15 01:00:32.592686] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.380 [2024-05-15 01:00:32.605453] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.380 [2024-05-15 01:00:32.605480] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.380 [2024-05-15 01:00:32.617520] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.380 [2024-05-15 01:00:32.617547] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.380 [2024-05-15 01:00:32.629302] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.380 [2024-05-15 01:00:32.629330] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.380 [2024-05-15 01:00:32.640628] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.380 [2024-05-15 01:00:32.640656] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.380 [2024-05-15 01:00:32.652286] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.380 [2024-05-15 01:00:32.652314] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.380 [2024-05-15 01:00:32.663876] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.380 [2024-05-15 01:00:32.663903] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.380 [2024-05-15 01:00:32.675742] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.380 [2024-05-15 01:00:32.675770] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.380 [2024-05-15 01:00:32.687188] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.380 [2024-05-15 01:00:32.687215] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.380 [2024-05-15 01:00:32.698541] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.380 [2024-05-15 01:00:32.698569] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.380 [2024-05-15 01:00:32.709986] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.380 [2024-05-15 01:00:32.710022] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.380 [2024-05-15 01:00:32.721675] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.380 [2024-05-15 01:00:32.721702] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.380 [2024-05-15 01:00:32.733797] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.380 [2024-05-15 01:00:32.733825] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.380 [2024-05-15 01:00:32.745766] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.380 [2024-05-15 01:00:32.745793] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.380 [2024-05-15 01:00:32.757158] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.380 [2024-05-15 01:00:32.757186] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.380 [2024-05-15 01:00:32.768785] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.380 [2024-05-15 01:00:32.768812] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.638 [2024-05-15 01:00:32.780436] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.638 [2024-05-15 01:00:32.780463] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.638 [2024-05-15 01:00:32.794342] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.638 [2024-05-15 01:00:32.794369] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.638 [2024-05-15 01:00:32.805634] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.638 [2024-05-15 01:00:32.805661] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.638 [2024-05-15 01:00:32.817475] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.638 [2024-05-15 01:00:32.817502] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.638 [2024-05-15 01:00:32.829430] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.638 [2024-05-15 01:00:32.829471] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.638 [2024-05-15 01:00:32.840861] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.638 [2024-05-15 01:00:32.840888] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.638 [2024-05-15 01:00:32.852900] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.638 [2024-05-15 01:00:32.852962] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:20.638 [2024-05-15 01:00:32.864621] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:20.638 [2024-05-15 01:00:32.864649] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:20.638 [2024-05-15 01:00:32.876708] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:20.638 [2024-05-15 01:00:32.876750] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same pair of messages repeats at roughly 12 ms intervals, only the timestamps changing, from 2024-05-15 01:00:32.888854 through 01:00:36.313712 ...]
00:13:24.033 [2024-05-15 01:00:36.325505] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:24.033 [2024-05-15 01:00:36.325548] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.033 [2024-05-15 01:00:36.337154] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.033 [2024-05-15 01:00:36.337182] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.033 [2024-05-15 01:00:36.348468] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.033 [2024-05-15 01:00:36.348495] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.033 [2024-05-15 01:00:36.360335] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.033 [2024-05-15 01:00:36.360362] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.033 [2024-05-15 01:00:36.371733] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.033 [2024-05-15 01:00:36.371761] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.033 [2024-05-15 01:00:36.384051] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.033 [2024-05-15 01:00:36.384078] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.033 [2024-05-15 01:00:36.396187] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.033 [2024-05-15 01:00:36.396214] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.033 [2024-05-15 01:00:36.407991] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.033 [2024-05-15 01:00:36.408018] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.033 00:13:24.033 Latency(us) 00:13:24.033 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:24.033 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:13:24.033 Nvme1n1 : 5.01 10722.44 83.77 0.00 0.00 11922.42 5485.61 26408.58 00:13:24.033 =================================================================================================================== 00:13:24.033 Total : 10722.44 83.77 0.00 0.00 11922.42 5485.61 26408.58 00:13:24.033 [2024-05-15 01:00:36.415734] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.033 [2024-05-15 01:00:36.415763] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.293 [2024-05-15 01:00:36.423754] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.293 [2024-05-15 01:00:36.423784] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.293 [2024-05-15 01:00:36.431769] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.293 [2024-05-15 01:00:36.431797] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.293 [2024-05-15 01:00:36.439805] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.293 [2024-05-15 01:00:36.439838] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.293 [2024-05-15 01:00:36.447863] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.293 [2024-05-15 01:00:36.447912] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.293 [2024-05-15 01:00:36.455880] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.293 [2024-05-15 01:00:36.455924] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.293 [2024-05-15 01:00:36.463898] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.293 [2024-05-15 01:00:36.463951] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.293 [2024-05-15 01:00:36.471918] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.293 [2024-05-15 01:00:36.471980] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.293 [2024-05-15 01:00:36.479954] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.293 [2024-05-15 01:00:36.480001] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.293 [2024-05-15 01:00:36.487987] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.293 [2024-05-15 01:00:36.488028] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.293 [2024-05-15 01:00:36.495999] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.293 [2024-05-15 01:00:36.496044] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.293 [2024-05-15 01:00:36.504022] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.293 [2024-05-15 01:00:36.504067] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.293 [2024-05-15 01:00:36.512050] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.293 [2024-05-15 01:00:36.512096] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.293 [2024-05-15 01:00:36.520059] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.293 [2024-05-15 01:00:36.520104] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.293 [2024-05-15 01:00:36.528089] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.293 [2024-05-15 01:00:36.528141] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.293 [2024-05-15 01:00:36.536116] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.293 [2024-05-15 01:00:36.536157] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.293 [2024-05-15 01:00:36.544125] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.293 [2024-05-15 01:00:36.544168] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.293 [2024-05-15 01:00:36.552111] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.293 [2024-05-15 01:00:36.552147] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.293 [2024-05-15 01:00:36.560106] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.293 [2024-05-15 01:00:36.560127] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.293 [2024-05-15 01:00:36.568125] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.293 [2024-05-15 01:00:36.568145] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.293 [2024-05-15 01:00:36.576148] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.293 [2024-05-15 01:00:36.576168] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.293 [2024-05-15 01:00:36.584167] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.293 [2024-05-15 01:00:36.584187] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.293 [2024-05-15 01:00:36.592239] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.293 [2024-05-15 01:00:36.592279] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.294 [2024-05-15 01:00:36.600268] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.294 [2024-05-15 01:00:36.600311] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.294 [2024-05-15 01:00:36.608309] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.294 [2024-05-15 01:00:36.608353] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.294 [2024-05-15 01:00:36.616289] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.294 [2024-05-15 01:00:36.616314] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.294 [2024-05-15 01:00:36.624311] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.294 [2024-05-15 01:00:36.624347] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.294 [2024-05-15 01:00:36.632329] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.294 [2024-05-15 01:00:36.632354] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.294 [2024-05-15 01:00:36.640345] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.294 [2024-05-15 01:00:36.640369] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.294 [2024-05-15 01:00:36.648407] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.294 [2024-05-15 01:00:36.648446] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.294 [2024-05-15 01:00:36.656433] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.294 [2024-05-15 01:00:36.656477] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.294 [2024-05-15 01:00:36.664448] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.294 [2024-05-15 01:00:36.664485] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.294 [2024-05-15 01:00:36.672431] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.294 [2024-05-15 01:00:36.672455] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.294 [2024-05-15 01:00:36.680454] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.294 [2024-05-15 01:00:36.680478] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.554 [2024-05-15 01:00:36.688477] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.554 [2024-05-15 01:00:36.688502] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1235365) - No such process 00:13:24.554 01:00:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1235365 00:13:24.554 01:00:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.554 01:00:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.554 01:00:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:24.554 01:00:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.554 01:00:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:24.554 01:00:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.554 01:00:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:24.554 delay0 00:13:24.554 01:00:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.554 01:00:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:13:24.554 01:00:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.554 01:00:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:24.554 01:00:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.554 01:00:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:13:24.554 EAL: No free 2048 kB hugepages reported on node 1 00:13:24.554 [2024-05-15 01:00:36.847135] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:31.126 Initializing NVMe Controllers 00:13:31.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:31.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:31.126 Initialization complete. Launching workers. 
00:13:31.126 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 118 00:13:31.126 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 405, failed to submit 33 00:13:31.126 success 204, unsuccess 201, failed 0 00:13:31.126 01:00:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:31.126 01:00:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:31.126 01:00:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:31.126 01:00:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:13:31.126 01:00:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:31.126 01:00:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:13:31.126 01:00:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:31.126 01:00:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:31.126 rmmod nvme_tcp 00:13:31.126 rmmod nvme_fabrics 00:13:31.126 rmmod nvme_keyring 00:13:31.126 01:00:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:31.126 01:00:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:13:31.126 01:00:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:13:31.126 01:00:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1233901 ']' 00:13:31.126 01:00:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1233901 00:13:31.126 01:00:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 1233901 ']' 00:13:31.126 01:00:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 1233901 00:13:31.126 01:00:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:13:31.126 01:00:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:31.126 01:00:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1233901 00:13:31.126 01:00:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:31.126 01:00:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:31.126 01:00:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1233901' 00:13:31.126 killing process with pid 1233901 00:13:31.126 01:00:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 1233901 00:13:31.126 [2024-05-15 01:00:43.076827] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:31.126 01:00:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 1233901 00:13:31.126 01:00:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:31.126 01:00:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:31.126 01:00:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:31.126 01:00:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:31.126 01:00:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:31.126 01:00:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:31.126 01:00:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:31.126 01:00:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.036 
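The long run of "Requested NSID 1 already in use" / "Unable to add namespace" pairs above is the expected outcome of the zcopy test repeatedly calling nvmf_subsystem_add_ns with an NSID that cnode1 already has; each attempt is rejected and logged. A minimal reproduction sketch outside the harness, assuming a target already listening on 10.0.0.2:4420 and an SPDK checkout at ./spdk (rpc_cmd in the trace is a thin wrapper around scripts/rpc.py; all parameter values are copied from the trace above):

  # delay0 is layered on malloc0 and exported as NSID 1 of cnode1
  ./spdk/scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # a second add with the same NSID is rejected, matching the errors logged above
  ./spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 \
      || echo 'expected: Requested NSID 1 already in use'
  # the abort example then drives that namespace over TCP (invocation copied from the trace)
  ./spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'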
01:00:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:33.036 00:13:33.036 real 0m29.069s 00:13:33.036 user 0m41.490s 00:13:33.036 sys 0m9.172s 00:13:33.036 01:00:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:33.036 01:00:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:33.036 ************************************ 00:13:33.036 END TEST nvmf_zcopy 00:13:33.036 ************************************ 00:13:33.036 01:00:45 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:33.036 01:00:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:33.036 01:00:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:33.036 01:00:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:33.293 ************************************ 00:13:33.293 START TEST nvmf_nmic 00:13:33.293 ************************************ 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:33.293 * Looking for test storage... 00:13:33.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # 
nvmftestinit 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:33.293 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:33.294 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.294 01:00:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:33.294 01:00:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.294 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:33.294 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:33.294 01:00:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:13:33.294 01:00:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:35.829 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:35.829 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:13:35.829 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:35.829 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:35.829 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:35.829 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:35.829 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:35.829 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:13:35.829 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:35.829 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:13:35.829 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:13:35.829 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:13:35.829 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:13:35.829 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:13:35.829 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:13:35.829 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:35.829 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:35.829 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:35.829 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:35.829 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:35.829 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:35.829 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:35.829 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:35.829 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:35.829 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:35.829 01:00:47 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:35.829 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:35.829 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:35.829 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:35.829 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:35.829 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:35.829 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:35.830 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:35.830 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:35.830 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:35.830 01:00:47 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:35.830 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:35.830 01:00:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:35.830 01:00:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:35.830 01:00:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:35.830 01:00:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:35.830 01:00:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:35.830 01:00:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:35.830 01:00:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:35.830 01:00:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:35.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:35.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:13:35.830 00:13:35.830 --- 10.0.0.2 ping statistics --- 00:13:35.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.830 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:13:35.830 01:00:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:35.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:35.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:13:35.830 00:13:35.830 --- 10.0.0.1 ping statistics --- 00:13:35.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.830 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:13:35.830 01:00:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:35.830 01:00:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:13:35.830 01:00:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:35.830 01:00:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:35.830 01:00:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:35.830 01:00:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:35.830 01:00:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:35.830 01:00:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:35.830 01:00:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:35.830 01:00:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:35.830 01:00:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:35.830 01:00:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:35.830 01:00:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:35.830 01:00:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1239041 00:13:35.830 01:00:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:35.830 01:00:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1239041 00:13:35.830 01:00:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 1239041 ']' 00:13:35.830 01:00:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.830 01:00:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:35.830 01:00:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.830 01:00:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:35.830 01:00:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:35.830 [2024-05-15 01:00:48.154745] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:13:35.830 [2024-05-15 01:00:48.154830] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:35.830 EAL: No free 2048 kB hugepages reported on node 1 00:13:36.089 [2024-05-15 01:00:48.237265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:36.089 [2024-05-15 01:00:48.357419] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:36.089 [2024-05-15 01:00:48.357477] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:36.089 [2024-05-15 01:00:48.357493] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:36.089 [2024-05-15 01:00:48.357506] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:36.089 [2024-05-15 01:00:48.357518] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:36.089 [2024-05-15 01:00:48.357643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.089 [2024-05-15 01:00:48.357888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:36.089 [2024-05-15 01:00:48.357964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:36.089 [2024-05-15 01:00:48.357983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:37.025 [2024-05-15 01:00:49.150956] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:37.025 Malloc0 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:37.025 [2024-05-15 01:00:49.201862] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:37.025 [2024-05-15 01:00:49.202186] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:37.025 test case1: single bdev can't be used in multiple subsystems 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:37.025 [2024-05-15 01:00:49.226012] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:37.025 [2024-05-15 01:00:49.226042] subsystem.c:2015:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:37.025 [2024-05-15 01:00:49.226058] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.025 request: 00:13:37.025 { 00:13:37.025 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:37.025 "namespace": { 00:13:37.025 "bdev_name": "Malloc0", 00:13:37.025 "no_auto_visible": false 00:13:37.025 }, 00:13:37.025 "method": "nvmf_subsystem_add_ns", 00:13:37.025 "req_id": 1 00:13:37.025 } 00:13:37.025 Got JSON-RPC error response 00:13:37.025 response: 00:13:37.025 { 00:13:37.025 "code": -32602, 00:13:37.025 "message": "Invalid parameters" 00:13:37.025 } 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:37.025 01:00:49 
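Test case1 above checks that one bdev cannot back namespaces in two subsystems: Malloc0 is already claimed by cnode1, so adding it to cnode2 fails with the JSON-RPC error shown, which is the expected result. A rough equivalent with scripts/rpc.py, reusing the subsystem and bdev names from nmic.sh and assuming the target from this run is still up:

  # cnode1 already exposes Malloc0; a second subsystem cannot claim the same bdev
  ./spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  ./spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  if ./spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
      echo 'unexpected: Malloc0 was added to a second subsystem'
  else
      echo 'expected failure: bdev Malloc0 is already claimed by cnode1'
  fi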
nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:37.025 01:00:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:37.025 Adding namespace failed - expected result. 00:13:37.026 01:00:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:37.026 test case2: host connect to nvmf target in multiple paths 00:13:37.026 01:00:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:37.026 01:00:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.026 01:00:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:37.026 [2024-05-15 01:00:49.234117] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:37.026 01:00:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.026 01:00:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:37.594 01:00:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:38.161 01:00:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:38.161 01:00:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:13:38.161 01:00:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:38.161 01:00:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:38.161 01:00:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:13:40.695 01:00:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:40.695 01:00:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:40.695 01:00:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:40.695 01:00:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:40.695 01:00:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:40.695 01:00:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:13:40.695 01:00:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:40.695 [global] 00:13:40.695 thread=1 00:13:40.695 invalidate=1 00:13:40.695 rw=write 00:13:40.695 time_based=1 00:13:40.695 runtime=1 00:13:40.695 ioengine=libaio 00:13:40.695 direct=1 00:13:40.695 bs=4096 00:13:40.695 iodepth=1 00:13:40.695 norandommap=0 00:13:40.695 numjobs=1 00:13:40.695 00:13:40.695 verify_dump=1 00:13:40.695 verify_backlog=512 00:13:40.695 verify_state_save=0 00:13:40.695 do_verify=1 00:13:40.695 verify=crc32c-intel 00:13:40.695 [job0] 00:13:40.695 filename=/dev/nvme0n1 00:13:40.695 Could not set queue depth (nvme0n1) 00:13:40.695 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:13:40.695 fio-3.35 00:13:40.695 Starting 1 thread 00:13:41.626 00:13:41.626 job0: (groupid=0, jobs=1): err= 0: pid=1239686: Wed May 15 01:00:53 2024 00:13:41.626 read: IOPS=507, BW=2029KiB/s (2078kB/s)(2084KiB/1027msec) 00:13:41.626 slat (nsec): min=5620, max=73764, avg=22437.50, stdev=8949.69 00:13:41.626 clat (usec): min=356, max=42032, avg=1214.10, stdev=5408.96 00:13:41.626 lat (usec): min=362, max=42042, avg=1236.54, stdev=5408.76 00:13:41.626 clat percentiles (usec): 00:13:41.626 | 1.00th=[ 371], 5.00th=[ 388], 10.00th=[ 408], 20.00th=[ 441], 00:13:41.626 | 30.00th=[ 474], 40.00th=[ 490], 50.00th=[ 502], 60.00th=[ 519], 00:13:41.626 | 70.00th=[ 537], 80.00th=[ 562], 90.00th=[ 570], 95.00th=[ 586], 00:13:41.626 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:41.626 | 99.99th=[42206] 00:13:41.626 write: IOPS=997, BW=3988KiB/s (4084kB/s)(4096KiB/1027msec); 0 zone resets 00:13:41.626 slat (nsec): min=7381, max=89258, avg=22277.53, stdev=12740.34 00:13:41.626 clat (usec): min=211, max=572, avg=342.30, stdev=83.56 00:13:41.626 lat (usec): min=219, max=612, avg=364.58, stdev=89.02 00:13:41.626 clat percentiles (usec): 00:13:41.626 | 1.00th=[ 221], 5.00th=[ 227], 10.00th=[ 235], 20.00th=[ 247], 00:13:41.626 | 30.00th=[ 285], 40.00th=[ 322], 50.00th=[ 343], 60.00th=[ 363], 00:13:41.626 | 70.00th=[ 388], 80.00th=[ 424], 90.00th=[ 461], 95.00th=[ 486], 00:13:41.626 | 99.00th=[ 523], 99.50th=[ 529], 99.90th=[ 570], 99.95th=[ 570], 00:13:41.626 | 99.99th=[ 570] 00:13:41.626 bw ( KiB/s): min= 3520, max= 4672, per=100.00%, avg=4096.00, stdev=814.59, samples=2 00:13:41.627 iops : min= 880, max= 1168, avg=1024.00, stdev=203.65, samples=2 00:13:41.627 lat (usec) : 250=13.98%, 500=66.86%, 750=18.58% 00:13:41.627 lat (msec) : 50=0.58% 00:13:41.627 cpu : usr=2.92%, sys=3.90%, ctx=1545, majf=0, minf=2 00:13:41.627 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:41.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:41.627 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:41.627 issued rwts: total=521,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:41.627 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:41.627 00:13:41.627 Run status group 0 (all jobs): 00:13:41.627 READ: bw=2029KiB/s (2078kB/s), 2029KiB/s-2029KiB/s (2078kB/s-2078kB/s), io=2084KiB (2134kB), run=1027-1027msec 00:13:41.627 WRITE: bw=3988KiB/s (4084kB/s), 3988KiB/s-3988KiB/s (4084kB/s-4084kB/s), io=4096KiB (4194kB), run=1027-1027msec 00:13:41.627 00:13:41.627 Disk stats (read/write): 00:13:41.627 nvme0n1: ios=567/1024, merge=0/0, ticks=512/322, in_queue=834, util=92.89% 00:13:41.627 01:00:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:41.887 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:41.887 01:00:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:41.887 01:00:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:13:41.887 01:00:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:41.887 01:00:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:41.887 01:00:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:41.887 01:00:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:41.887 01:00:54 
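The [job0] file that fio-wrapper generated above maps to an ordinary fio invocation; a rough stand-alone equivalent, assuming the connected namespace shows up as /dev/nvme0n1 as it did in this run (options mirror the job file: 4 KiB libaio writes, queue depth 1, 1 s time-based, crc32c verification):

  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --thread=1 --invalidate=1 \
      --bs=4096 --iodepth=1 --rw=write --numjobs=1 --time_based=1 --runtime=1 \
      --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512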
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:13:41.887 01:00:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:41.887 01:00:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:41.887 01:00:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:41.887 01:00:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:13:41.887 01:00:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:41.887 01:00:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:13:41.887 01:00:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:41.887 01:00:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:41.887 rmmod nvme_tcp 00:13:41.887 rmmod nvme_fabrics 00:13:41.887 rmmod nvme_keyring 00:13:41.887 01:00:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:41.887 01:00:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:13:41.887 01:00:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:13:41.887 01:00:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1239041 ']' 00:13:41.887 01:00:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1239041 00:13:41.887 01:00:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 1239041 ']' 00:13:41.887 01:00:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 1239041 00:13:41.887 01:00:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:13:41.887 01:00:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:41.887 01:00:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1239041 00:13:41.887 01:00:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:41.887 01:00:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:41.887 01:00:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1239041' 00:13:41.887 killing process with pid 1239041 00:13:41.887 01:00:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 1239041 00:13:41.887 [2024-05-15 01:00:54.132128] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:41.887 01:00:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 1239041 00:13:42.145 01:00:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:42.145 01:00:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:42.145 01:00:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:42.145 01:00:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:42.145 01:00:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:42.145 01:00:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.145 01:00:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:42.145 01:00:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.698 01:00:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:44.698 00:13:44.698 real 0m11.049s 00:13:44.698 user 0m25.174s 00:13:44.698 sys 0m2.761s 00:13:44.698 01:00:56 nvmf_tcp.nvmf_nmic -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:13:44.698 01:00:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:44.698 ************************************ 00:13:44.698 END TEST nvmf_nmic 00:13:44.698 ************************************ 00:13:44.698 01:00:56 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:44.698 01:00:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:44.698 01:00:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:44.698 01:00:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:44.698 ************************************ 00:13:44.698 START TEST nvmf_fio_target 00:13:44.698 ************************************ 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:44.698 * Looking for test storage... 00:13:44.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
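The run_test call above launches the next test in the suite by executing fio.sh with --transport=tcp. A minimal sketch of a roughly equivalent standalone invocation, assuming the CI workspace layout visible in the trace (the path is specific to this build node):

  # Sketch only -- not part of the captured log.
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./test/nvmf/target/fio.sh --transport=tcp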
00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target 
-- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:44.698 01:00:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.232 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:47.232 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:47.232 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:47.232 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:47.232 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:47.232 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:47.232 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:47.232 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:47.232 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:47.232 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
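nvmftestinit is starting to scan the PCI bus at this point, matching devices against the e810, x722 and mlx device-ID lists assembled above. As a hedged aside (not part of the test scripts), the Intel E810 parts the trace is about to report can be listed directly with lspci using the vendor:device pair 8086:159b:

  # Sketch only -- list PCI functions matching the E810 device ID used above.
  lspci -d 8086:159b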
00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:47.233 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:47.233 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.233 
01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:47.233 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:47.233 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:47.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:47.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:13:47.233 00:13:47.233 --- 10.0.0.2 ping statistics --- 00:13:47.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.233 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:47.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:47.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:13:47.233 00:13:47.233 --- 10.0.0.1 ping statistics --- 00:13:47.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.233 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1242167 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1242167 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 1242167 ']' 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
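At this point the trace has completed the TCP test-network bring-up: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace, 10.0.0.1/24 and 10.0.0.2/24 are assigned to the initiator and target sides, TCP port 4420 is opened, connectivity is verified with ping in both directions, nvme-tcp is loaded, and nvmf_tgt is launched inside the namespace. Condensed into a hedged sketch (interface names, addresses, and flags are copied from this particular run and will differ on other hosts):

  # Sketch of the setup traced above; run as root from the spdk checkout.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator
  modprobe nvme-tcp
  # The log runs the absolute path under the CI workspace; relative path shown here.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF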
00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:47.233 01:00:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.233 [2024-05-15 01:00:59.335775] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:13:47.233 [2024-05-15 01:00:59.335865] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.233 EAL: No free 2048 kB hugepages reported on node 1 00:13:47.233 [2024-05-15 01:00:59.418084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:47.233 [2024-05-15 01:00:59.541961] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:47.233 [2024-05-15 01:00:59.542027] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:47.233 [2024-05-15 01:00:59.542043] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:47.233 [2024-05-15 01:00:59.542057] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:47.233 [2024-05-15 01:00:59.542068] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:47.233 [2024-05-15 01:00:59.542128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.233 [2024-05-15 01:00:59.542183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:47.233 [2024-05-15 01:00:59.542210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:47.233 [2024-05-15 01:00:59.542213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.490 01:00:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:47.490 01:00:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:13:47.490 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:47.490 01:00:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:47.490 01:00:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.490 01:00:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.490 01:00:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:47.748 [2024-05-15 01:00:59.926538] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:47.748 01:00:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:48.006 01:01:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:48.006 01:01:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:48.262 01:01:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:48.262 01:01:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:48.519 01:01:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:48.519 01:01:00 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:48.776 01:01:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:48.776 01:01:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:49.035 01:01:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:49.293 01:01:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:49.293 01:01:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:49.550 01:01:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:49.550 01:01:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:49.808 01:01:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:49.808 01:01:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:50.065 01:01:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:50.323 01:01:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:50.323 01:01:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:50.581 01:01:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:50.581 01:01:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:50.847 01:01:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:50.847 [2024-05-15 01:01:03.229789] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:50.847 [2024-05-15 01:01:03.230113] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:51.107 01:01:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:51.107 01:01:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:51.364 01:01:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:52.297 01:01:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 
-- # waitforserial SPDKISFASTANDAWESOME 4 00:13:52.297 01:01:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:13:52.297 01:01:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:52.297 01:01:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:13:52.298 01:01:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:13:52.298 01:01:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:13:54.204 01:01:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:54.204 01:01:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:54.204 01:01:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:54.204 01:01:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:13:54.204 01:01:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:54.204 01:01:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:13:54.204 01:01:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:54.204 [global] 00:13:54.204 thread=1 00:13:54.204 invalidate=1 00:13:54.204 rw=write 00:13:54.204 time_based=1 00:13:54.204 runtime=1 00:13:54.204 ioengine=libaio 00:13:54.204 direct=1 00:13:54.204 bs=4096 00:13:54.204 iodepth=1 00:13:54.204 norandommap=0 00:13:54.204 numjobs=1 00:13:54.204 00:13:54.204 verify_dump=1 00:13:54.204 verify_backlog=512 00:13:54.204 verify_state_save=0 00:13:54.204 do_verify=1 00:13:54.204 verify=crc32c-intel 00:13:54.204 [job0] 00:13:54.204 filename=/dev/nvme0n1 00:13:54.204 [job1] 00:13:54.204 filename=/dev/nvme0n2 00:13:54.204 [job2] 00:13:54.204 filename=/dev/nvme0n3 00:13:54.204 [job3] 00:13:54.204 filename=/dev/nvme0n4 00:13:54.204 Could not set queue depth (nvme0n1) 00:13:54.204 Could not set queue depth (nvme0n2) 00:13:54.204 Could not set queue depth (nvme0n3) 00:13:54.204 Could not set queue depth (nvme0n4) 00:13:54.462 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:54.462 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:54.462 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:54.462 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:54.462 fio-3.35 00:13:54.462 Starting 4 threads 00:13:55.846 00:13:55.846 job0: (groupid=0, jobs=1): err= 0: pid=1243128: Wed May 15 01:01:07 2024 00:13:55.846 read: IOPS=1128, BW=4515KiB/s (4624kB/s)(4520KiB/1001msec) 00:13:55.846 slat (nsec): min=6411, max=71074, avg=18217.24, stdev=7605.06 00:13:55.846 clat (usec): min=431, max=640, avg=502.22, stdev=30.25 00:13:55.846 lat (usec): min=446, max=658, avg=520.44, stdev=31.49 00:13:55.846 clat percentiles (usec): 00:13:55.846 | 1.00th=[ 445], 5.00th=[ 461], 10.00th=[ 469], 20.00th=[ 478], 00:13:55.846 | 30.00th=[ 486], 40.00th=[ 490], 50.00th=[ 498], 60.00th=[ 506], 00:13:55.846 | 70.00th=[ 515], 80.00th=[ 529], 90.00th=[ 545], 95.00th=[ 562], 00:13:55.846 | 99.00th=[ 594], 99.50th=[ 611], 99.90th=[ 635], 99.95th=[ 644], 00:13:55.846 | 99.99th=[ 644] 
00:13:55.846 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:55.846 slat (nsec): min=6059, max=60384, avg=13750.10, stdev=6696.91 00:13:55.846 clat (usec): min=199, max=3866, avg=246.87, stdev=118.57 00:13:55.846 lat (usec): min=206, max=3890, avg=260.62, stdev=119.86 00:13:55.846 clat percentiles (usec): 00:13:55.846 | 1.00th=[ 204], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 217], 00:13:55.846 | 30.00th=[ 221], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 233], 00:13:55.846 | 70.00th=[ 245], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 310], 00:13:55.846 | 99.00th=[ 486], 99.50th=[ 652], 99.90th=[ 2245], 99.95th=[ 3851], 00:13:55.846 | 99.99th=[ 3851] 00:13:55.846 bw ( KiB/s): min= 6656, max= 6656, per=40.87%, avg=6656.00, stdev= 0.00, samples=1 00:13:55.846 iops : min= 1664, max= 1664, avg=1664.00, stdev= 0.00, samples=1 00:13:55.846 lat (usec) : 250=41.60%, 500=38.18%, 750=19.99%, 1000=0.11% 00:13:55.846 lat (msec) : 2=0.04%, 4=0.08% 00:13:55.846 cpu : usr=2.50%, sys=4.30%, ctx=2667, majf=0, minf=2 00:13:55.846 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:55.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.846 issued rwts: total=1130,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.846 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:55.846 job1: (groupid=0, jobs=1): err= 0: pid=1243129: Wed May 15 01:01:07 2024 00:13:55.846 read: IOPS=18, BW=74.6KiB/s (76.4kB/s)(76.0KiB/1019msec) 00:13:55.846 slat (nsec): min=13318, max=32774, avg=18048.37, stdev=6680.34 00:13:55.846 clat (usec): min=40806, max=41929, avg=41354.21, stdev=361.55 00:13:55.846 lat (usec): min=40820, max=41962, avg=41372.26, stdev=362.70 00:13:55.846 clat percentiles (usec): 00:13:55.846 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:13:55.846 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:13:55.846 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:13:55.846 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:13:55.846 | 99.99th=[41681] 00:13:55.846 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:13:55.846 slat (nsec): min=7531, max=67444, avg=23067.18, stdev=9376.91 00:13:55.846 clat (usec): min=221, max=7080, avg=426.42, stdev=364.67 00:13:55.846 lat (usec): min=239, max=7101, avg=449.48, stdev=364.34 00:13:55.846 clat percentiles (usec): 00:13:55.846 | 1.00th=[ 231], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 269], 00:13:55.846 | 30.00th=[ 285], 40.00th=[ 306], 50.00th=[ 343], 60.00th=[ 429], 00:13:55.846 | 70.00th=[ 482], 80.00th=[ 519], 90.00th=[ 594], 95.00th=[ 766], 00:13:55.846 | 99.00th=[ 1020], 99.50th=[ 2343], 99.90th=[ 7111], 99.95th=[ 7111], 00:13:55.846 | 99.99th=[ 7111] 00:13:55.846 bw ( KiB/s): min= 4096, max= 4096, per=25.15%, avg=4096.00, stdev= 0.00, samples=1 00:13:55.846 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:55.846 lat (usec) : 250=7.72%, 500=65.35%, 750=18.27%, 1000=3.77% 00:13:55.846 lat (msec) : 2=0.75%, 4=0.38%, 10=0.19%, 50=3.58% 00:13:55.846 cpu : usr=0.88%, sys=0.79%, ctx=531, majf=0, minf=1 00:13:55.846 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:55.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:13:55.846 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.846 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:55.846 job2: (groupid=0, jobs=1): err= 0: pid=1243130: Wed May 15 01:01:07 2024 00:13:55.846 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:13:55.846 slat (nsec): min=6752, max=66498, avg=19479.96, stdev=8609.17 00:13:55.846 clat (usec): min=451, max=789, avg=588.35, stdev=70.22 00:13:55.846 lat (usec): min=467, max=803, avg=607.83, stdev=72.33 00:13:55.846 clat percentiles (usec): 00:13:55.846 | 1.00th=[ 469], 5.00th=[ 486], 10.00th=[ 498], 20.00th=[ 515], 00:13:55.846 | 30.00th=[ 537], 40.00th=[ 562], 50.00th=[ 586], 60.00th=[ 611], 00:13:55.846 | 70.00th=[ 635], 80.00th=[ 660], 90.00th=[ 685], 95.00th=[ 709], 00:13:55.846 | 99.00th=[ 750], 99.50th=[ 750], 99.90th=[ 775], 99.95th=[ 791], 00:13:55.846 | 99.99th=[ 791] 00:13:55.846 write: IOPS=1075, BW=4304KiB/s (4407kB/s)(4308KiB/1001msec); 0 zone resets 00:13:55.846 slat (nsec): min=6622, max=71209, avg=18292.97, stdev=10178.70 00:13:55.846 clat (usec): min=216, max=1469, avg=322.37, stdev=85.87 00:13:55.846 lat (usec): min=224, max=1490, avg=340.66, stdev=89.47 00:13:55.846 clat percentiles (usec): 00:13:55.846 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 258], 00:13:55.846 | 30.00th=[ 273], 40.00th=[ 289], 50.00th=[ 310], 60.00th=[ 326], 00:13:55.846 | 70.00th=[ 355], 80.00th=[ 379], 90.00th=[ 408], 95.00th=[ 441], 00:13:55.846 | 99.00th=[ 562], 99.50th=[ 701], 99.90th=[ 1074], 99.95th=[ 1467], 00:13:55.846 | 99.99th=[ 1467] 00:13:55.846 bw ( KiB/s): min= 4128, max= 4128, per=25.35%, avg=4128.00, stdev= 0.00, samples=1 00:13:55.846 iops : min= 1032, max= 1032, avg=1032.00, stdev= 0.00, samples=1 00:13:55.846 lat (usec) : 250=9.00%, 500=46.74%, 750=43.69%, 1000=0.48% 00:13:55.846 lat (msec) : 2=0.10% 00:13:55.846 cpu : usr=1.80%, sys=4.40%, ctx=2103, majf=0, minf=1 00:13:55.846 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:55.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.846 issued rwts: total=1024,1077,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.846 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:55.846 job3: (groupid=0, jobs=1): err= 0: pid=1243131: Wed May 15 01:01:07 2024 00:13:55.846 read: IOPS=664, BW=2657KiB/s (2721kB/s)(2660KiB/1001msec) 00:13:55.846 slat (nsec): min=6436, max=68715, avg=18264.63, stdev=8661.99 00:13:55.846 clat (usec): min=455, max=1204, avg=668.00, stdev=142.47 00:13:55.846 lat (usec): min=472, max=1218, avg=686.27, stdev=142.84 00:13:55.846 clat percentiles (usec): 00:13:55.846 | 1.00th=[ 469], 5.00th=[ 486], 10.00th=[ 502], 20.00th=[ 523], 00:13:55.846 | 30.00th=[ 570], 40.00th=[ 611], 50.00th=[ 644], 60.00th=[ 693], 00:13:55.846 | 70.00th=[ 734], 80.00th=[ 807], 90.00th=[ 881], 95.00th=[ 914], 00:13:55.846 | 99.00th=[ 1004], 99.50th=[ 1020], 99.90th=[ 1205], 99.95th=[ 1205], 00:13:55.846 | 99.99th=[ 1205] 00:13:55.846 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:13:55.846 slat (nsec): min=8504, max=64791, avg=19982.63, stdev=8995.29 00:13:55.846 clat (usec): min=235, max=8315, avg=502.60, stdev=289.01 00:13:55.846 lat (usec): min=246, max=8348, avg=522.58, stdev=289.14 00:13:55.846 clat percentiles (usec): 00:13:55.846 | 1.00th=[ 253], 5.00th=[ 281], 10.00th=[ 310], 20.00th=[ 379], 00:13:55.846 | 30.00th=[ 441], 40.00th=[ 469], 
50.00th=[ 490], 60.00th=[ 510], 00:13:55.846 | 70.00th=[ 537], 80.00th=[ 570], 90.00th=[ 652], 95.00th=[ 734], 00:13:55.846 | 99.00th=[ 979], 99.50th=[ 1188], 99.90th=[ 2212], 99.95th=[ 8291], 00:13:55.846 | 99.99th=[ 8291] 00:13:55.846 bw ( KiB/s): min= 4096, max= 4096, per=25.15%, avg=4096.00, stdev= 0.00, samples=1 00:13:55.846 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:55.846 lat (usec) : 250=0.47%, 500=36.83%, 750=49.38%, 1000=12.31% 00:13:55.846 lat (msec) : 2=0.89%, 4=0.06%, 10=0.06% 00:13:55.846 cpu : usr=2.70%, sys=3.80%, ctx=1689, majf=0, minf=1 00:13:55.846 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:55.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.847 issued rwts: total=665,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.847 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:55.847 00:13:55.847 Run status group 0 (all jobs): 00:13:55.847 READ: bw=10.9MiB/s (11.4MB/s), 74.6KiB/s-4515KiB/s (76.4kB/s-4624kB/s), io=11.1MiB (11.6MB), run=1001-1019msec 00:13:55.847 WRITE: bw=15.9MiB/s (16.7MB/s), 2010KiB/s-6138KiB/s (2058kB/s-6285kB/s), io=16.2MiB (17.0MB), run=1001-1019msec 00:13:55.847 00:13:55.847 Disk stats (read/write): 00:13:55.847 nvme0n1: ios=1076/1149, merge=0/0, ticks=764/278, in_queue=1042, util=97.49% 00:13:55.847 nvme0n2: ios=42/512, merge=0/0, ticks=624/207, in_queue=831, util=87.56% 00:13:55.847 nvme0n3: ios=868/1024, merge=0/0, ticks=765/316, in_queue=1081, util=98.22% 00:13:55.847 nvme0n4: ios=512/957, merge=0/0, ticks=338/466, in_queue=804, util=89.63% 00:13:55.847 01:01:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:55.847 [global] 00:13:55.847 thread=1 00:13:55.847 invalidate=1 00:13:55.847 rw=randwrite 00:13:55.847 time_based=1 00:13:55.847 runtime=1 00:13:55.847 ioengine=libaio 00:13:55.847 direct=1 00:13:55.847 bs=4096 00:13:55.847 iodepth=1 00:13:55.847 norandommap=0 00:13:55.847 numjobs=1 00:13:55.847 00:13:55.847 verify_dump=1 00:13:55.847 verify_backlog=512 00:13:55.847 verify_state_save=0 00:13:55.847 do_verify=1 00:13:55.847 verify=crc32c-intel 00:13:55.847 [job0] 00:13:55.847 filename=/dev/nvme0n1 00:13:55.847 [job1] 00:13:55.847 filename=/dev/nvme0n2 00:13:55.847 [job2] 00:13:55.847 filename=/dev/nvme0n3 00:13:55.847 [job3] 00:13:55.847 filename=/dev/nvme0n4 00:13:55.847 Could not set queue depth (nvme0n1) 00:13:55.847 Could not set queue depth (nvme0n2) 00:13:55.847 Could not set queue depth (nvme0n3) 00:13:55.847 Could not set queue depth (nvme0n4) 00:13:55.847 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:55.847 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:55.847 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:55.847 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:55.847 fio-3.35 00:13:55.847 Starting 4 threads 00:13:57.223 00:13:57.223 job0: (groupid=0, jobs=1): err= 0: pid=1243409: Wed May 15 01:01:09 2024 00:13:57.223 read: IOPS=309, BW=1237KiB/s (1267kB/s)(1268KiB/1025msec) 00:13:57.223 slat (nsec): min=5817, max=46788, avg=13648.72, stdev=4461.40 00:13:57.223 
clat (usec): min=326, max=42541, avg=2724.44, stdev=9428.15 00:13:57.223 lat (usec): min=333, max=42560, avg=2738.09, stdev=9429.84 00:13:57.223 clat percentiles (usec): 00:13:57.223 | 1.00th=[ 351], 5.00th=[ 367], 10.00th=[ 375], 20.00th=[ 388], 00:13:57.223 | 30.00th=[ 396], 40.00th=[ 404], 50.00th=[ 408], 60.00th=[ 416], 00:13:57.223 | 70.00th=[ 424], 80.00th=[ 445], 90.00th=[ 519], 95.00th=[41157], 00:13:57.223 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:13:57.223 | 99.99th=[42730] 00:13:57.223 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:13:57.223 slat (nsec): min=6526, max=55740, avg=18410.84, stdev=8346.18 00:13:57.223 clat (usec): min=215, max=490, avg=279.15, stdev=49.09 00:13:57.223 lat (usec): min=226, max=509, avg=297.56, stdev=50.95 00:13:57.223 clat percentiles (usec): 00:13:57.223 | 1.00th=[ 225], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 243], 00:13:57.223 | 30.00th=[ 247], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 269], 00:13:57.223 | 70.00th=[ 285], 80.00th=[ 318], 90.00th=[ 359], 95.00th=[ 392], 00:13:57.223 | 99.00th=[ 424], 99.50th=[ 429], 99.90th=[ 490], 99.95th=[ 490], 00:13:57.223 | 99.99th=[ 490] 00:13:57.223 bw ( KiB/s): min= 4096, max= 4096, per=34.77%, avg=4096.00, stdev= 0.00, samples=1 00:13:57.223 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:57.223 lat (usec) : 250=21.23%, 500=74.43%, 750=2.17% 00:13:57.223 lat (msec) : 50=2.17% 00:13:57.223 cpu : usr=0.98%, sys=1.07%, ctx=830, majf=0, minf=2 00:13:57.223 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:57.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.224 issued rwts: total=317,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:57.224 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:57.224 job1: (groupid=0, jobs=1): err= 0: pid=1243424: Wed May 15 01:01:09 2024 00:13:57.224 read: IOPS=22, BW=91.6KiB/s (93.8kB/s)(92.0KiB/1004msec) 00:13:57.224 slat (nsec): min=10808, max=31710, avg=16132.96, stdev=3707.57 00:13:57.224 clat (usec): min=516, max=41428, avg=33994.11, stdev=15692.92 00:13:57.224 lat (usec): min=531, max=41443, avg=34010.24, stdev=15693.31 00:13:57.224 clat percentiles (usec): 00:13:57.224 | 1.00th=[ 519], 5.00th=[ 545], 10.00th=[ 545], 20.00th=[40633], 00:13:57.224 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:57.224 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:57.224 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:13:57.224 | 99.99th=[41681] 00:13:57.224 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:13:57.224 slat (nsec): min=6993, max=65864, avg=24694.98, stdev=12086.49 00:13:57.224 clat (usec): min=211, max=2140, avg=400.83, stdev=265.02 00:13:57.224 lat (usec): min=220, max=2180, avg=425.53, stdev=267.46 00:13:57.224 clat percentiles (usec): 00:13:57.224 | 1.00th=[ 215], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 241], 00:13:57.224 | 30.00th=[ 265], 40.00th=[ 289], 50.00th=[ 326], 60.00th=[ 379], 00:13:57.224 | 70.00th=[ 408], 80.00th=[ 449], 90.00th=[ 553], 95.00th=[ 947], 00:13:57.224 | 99.00th=[ 1614], 99.50th=[ 1893], 99.90th=[ 2147], 99.95th=[ 2147], 00:13:57.224 | 99.99th=[ 2147] 00:13:57.224 bw ( KiB/s): min= 4096, max= 4096, per=34.77%, avg=4096.00, stdev= 0.00, samples=1 00:13:57.224 iops : min= 1024, max= 1024, avg=1024.00, 
stdev= 0.00, samples=1 00:13:57.224 lat (usec) : 250=23.36%, 500=59.07%, 750=7.29%, 1000=2.62% 00:13:57.224 lat (msec) : 2=3.74%, 4=0.37%, 50=3.55% 00:13:57.224 cpu : usr=0.90%, sys=1.00%, ctx=536, majf=0, minf=1 00:13:57.224 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:57.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.224 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:57.224 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:57.224 job2: (groupid=0, jobs=1): err= 0: pid=1243464: Wed May 15 01:01:09 2024 00:13:57.224 read: IOPS=249, BW=996KiB/s (1020kB/s)(1020KiB/1024msec) 00:13:57.224 slat (nsec): min=5946, max=48780, avg=15308.21, stdev=4991.42 00:13:57.224 clat (usec): min=372, max=41957, avg=3149.83, stdev=10165.09 00:13:57.224 lat (usec): min=387, max=41972, avg=3165.13, stdev=10165.90 00:13:57.224 clat percentiles (usec): 00:13:57.224 | 1.00th=[ 379], 5.00th=[ 392], 10.00th=[ 408], 20.00th=[ 416], 00:13:57.224 | 30.00th=[ 420], 40.00th=[ 424], 50.00th=[ 429], 60.00th=[ 433], 00:13:57.224 | 70.00th=[ 437], 80.00th=[ 445], 90.00th=[ 494], 95.00th=[41157], 00:13:57.224 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:13:57.224 | 99.99th=[42206] 00:13:57.224 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:13:57.224 slat (nsec): min=7501, max=75545, avg=25905.93, stdev=12962.26 00:13:57.224 clat (usec): min=217, max=934, avg=387.50, stdev=76.10 00:13:57.224 lat (usec): min=235, max=973, avg=413.40, stdev=80.17 00:13:57.224 clat percentiles (usec): 00:13:57.224 | 1.00th=[ 245], 5.00th=[ 273], 10.00th=[ 297], 20.00th=[ 330], 00:13:57.224 | 30.00th=[ 347], 40.00th=[ 371], 50.00th=[ 392], 60.00th=[ 404], 00:13:57.224 | 70.00th=[ 420], 80.00th=[ 437], 90.00th=[ 465], 95.00th=[ 494], 00:13:57.224 | 99.00th=[ 611], 99.50th=[ 701], 99.90th=[ 938], 99.95th=[ 938], 00:13:57.224 | 99.99th=[ 938] 00:13:57.224 bw ( KiB/s): min= 4096, max= 4096, per=34.77%, avg=4096.00, stdev= 0.00, samples=1 00:13:57.224 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:57.224 lat (usec) : 250=1.17%, 500=93.09%, 750=2.87%, 1000=0.26% 00:13:57.224 lat (msec) : 2=0.39%, 50=2.22% 00:13:57.224 cpu : usr=0.98%, sys=1.56%, ctx=767, majf=0, minf=1 00:13:57.224 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:57.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.224 issued rwts: total=255,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:57.224 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:57.224 job3: (groupid=0, jobs=1): err= 0: pid=1243476: Wed May 15 01:01:09 2024 00:13:57.224 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:13:57.224 slat (nsec): min=7839, max=60210, avg=16126.00, stdev=6533.86 00:13:57.224 clat (usec): min=388, max=849, avg=468.70, stdev=61.99 00:13:57.224 lat (usec): min=397, max=863, avg=484.83, stdev=64.80 00:13:57.224 clat percentiles (usec): 00:13:57.224 | 1.00th=[ 396], 5.00th=[ 400], 10.00th=[ 404], 20.00th=[ 412], 00:13:57.224 | 30.00th=[ 420], 40.00th=[ 433], 50.00th=[ 445], 60.00th=[ 474], 00:13:57.224 | 70.00th=[ 515], 80.00th=[ 537], 90.00th=[ 553], 95.00th=[ 570], 00:13:57.224 | 99.00th=[ 627], 99.50th=[ 652], 99.90th=[ 668], 99.95th=[ 848], 00:13:57.224 | 
99.99th=[ 848] 00:13:57.224 write: IOPS=1481, BW=5926KiB/s (6068kB/s)(5932KiB/1001msec); 0 zone resets 00:13:57.224 slat (nsec): min=8013, max=67225, avg=18802.81, stdev=8421.38 00:13:57.224 clat (usec): min=230, max=504, avg=312.51, stdev=38.11 00:13:57.224 lat (usec): min=240, max=543, avg=331.32, stdev=41.77 00:13:57.224 clat percentiles (usec): 00:13:57.224 | 1.00th=[ 249], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 281], 00:13:57.224 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 306], 60.00th=[ 314], 00:13:57.224 | 70.00th=[ 322], 80.00th=[ 343], 90.00th=[ 363], 95.00th=[ 383], 00:13:57.224 | 99.00th=[ 433], 99.50th=[ 465], 99.90th=[ 498], 99.95th=[ 506], 00:13:57.224 | 99.99th=[ 506] 00:13:57.224 bw ( KiB/s): min= 5112, max= 5112, per=43.39%, avg=5112.00, stdev= 0.00, samples=1 00:13:57.224 iops : min= 1278, max= 1278, avg=1278.00, stdev= 0.00, samples=1 00:13:57.224 lat (usec) : 250=0.64%, 500=85.56%, 750=13.76%, 1000=0.04% 00:13:57.224 cpu : usr=3.00%, sys=5.90%, ctx=2508, majf=0, minf=1 00:13:57.224 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:57.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.224 issued rwts: total=1024,1483,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:57.224 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:57.224 00:13:57.224 Run status group 0 (all jobs): 00:13:57.224 READ: bw=6318KiB/s (6470kB/s), 91.6KiB/s-4092KiB/s (93.8kB/s-4190kB/s), io=6476KiB (6631kB), run=1001-1025msec 00:13:57.224 WRITE: bw=11.5MiB/s (12.1MB/s), 1998KiB/s-5926KiB/s (2046kB/s-6068kB/s), io=11.8MiB (12.4MB), run=1001-1025msec 00:13:57.224 00:13:57.224 Disk stats (read/write): 00:13:57.224 nvme0n1: ios=360/512, merge=0/0, ticks=1021/134, in_queue=1155, util=97.49% 00:13:57.224 nvme0n2: ios=64/512, merge=0/0, ticks=1543/184, in_queue=1727, util=99.08% 00:13:57.224 nvme0n3: ios=116/512, merge=0/0, ticks=625/173, in_queue=798, util=88.78% 00:13:57.224 nvme0n4: ios=1038/1024, merge=0/0, ticks=1407/317, in_queue=1724, util=97.57% 00:13:57.224 01:01:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:57.224 [global] 00:13:57.224 thread=1 00:13:57.224 invalidate=1 00:13:57.224 rw=write 00:13:57.224 time_based=1 00:13:57.224 runtime=1 00:13:57.224 ioengine=libaio 00:13:57.224 direct=1 00:13:57.224 bs=4096 00:13:57.224 iodepth=128 00:13:57.224 norandommap=0 00:13:57.224 numjobs=1 00:13:57.224 00:13:57.224 verify_dump=1 00:13:57.224 verify_backlog=512 00:13:57.224 verify_state_save=0 00:13:57.224 do_verify=1 00:13:57.224 verify=crc32c-intel 00:13:57.224 [job0] 00:13:57.224 filename=/dev/nvme0n1 00:13:57.224 [job1] 00:13:57.224 filename=/dev/nvme0n2 00:13:57.224 [job2] 00:13:57.224 filename=/dev/nvme0n3 00:13:57.224 [job3] 00:13:57.224 filename=/dev/nvme0n4 00:13:57.224 Could not set queue depth (nvme0n1) 00:13:57.224 Could not set queue depth (nvme0n2) 00:13:57.224 Could not set queue depth (nvme0n3) 00:13:57.224 Could not set queue depth (nvme0n4) 00:13:57.224 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:57.224 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:57.224 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:57.224 job3: 
(g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:57.224 fio-3.35 00:13:57.224 Starting 4 threads 00:13:58.610 00:13:58.610 job0: (groupid=0, jobs=1): err= 0: pid=1243708: Wed May 15 01:01:10 2024 00:13:58.610 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:13:58.610 slat (usec): min=3, max=14543, avg=85.09, stdev=560.39 00:13:58.610 clat (usec): min=745, max=32979, avg=11582.32, stdev=2891.53 00:13:58.610 lat (usec): min=761, max=33015, avg=11667.40, stdev=2911.82 00:13:58.610 clat percentiles (usec): 00:13:58.610 | 1.00th=[ 6783], 5.00th=[ 7963], 10.00th=[ 8586], 20.00th=[ 9503], 00:13:58.610 | 30.00th=[10028], 40.00th=[10552], 50.00th=[10945], 60.00th=[11863], 00:13:58.610 | 70.00th=[12518], 80.00th=[13698], 90.00th=[15270], 95.00th=[17957], 00:13:58.610 | 99.00th=[20055], 99.50th=[20317], 99.90th=[23200], 99.95th=[27395], 00:13:58.610 | 99.99th=[32900] 00:13:58.610 write: IOPS=5736, BW=22.4MiB/s (23.5MB/s)(22.5MiB/1003msec); 0 zone resets 00:13:58.610 slat (usec): min=4, max=12440, avg=79.56, stdev=473.03 00:13:58.610 clat (usec): min=386, max=26209, avg=10714.11, stdev=2914.82 00:13:58.610 lat (usec): min=3057, max=26228, avg=10793.66, stdev=2938.28 00:13:58.610 clat percentiles (usec): 00:13:58.610 | 1.00th=[ 4359], 5.00th=[ 6194], 10.00th=[ 7832], 20.00th=[ 8979], 00:13:58.610 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10290], 60.00th=[10683], 00:13:58.610 | 70.00th=[11207], 80.00th=[12125], 90.00th=[14484], 95.00th=[15795], 00:13:58.610 | 99.00th=[21890], 99.50th=[21890], 99.90th=[23200], 99.95th=[23200], 00:13:58.610 | 99.99th=[26084] 00:13:58.610 bw ( KiB/s): min=20736, max=24544, per=35.10%, avg=22640.00, stdev=2692.66, samples=2 00:13:58.610 iops : min= 5184, max= 6136, avg=5660.00, stdev=673.17, samples=2 00:13:58.610 lat (usec) : 500=0.01%, 750=0.01% 00:13:58.610 lat (msec) : 2=0.39%, 4=0.45%, 10=33.21%, 20=64.77%, 50=1.17% 00:13:58.610 cpu : usr=7.88%, sys=10.38%, ctx=549, majf=0, minf=1 00:13:58.610 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:13:58.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:58.610 issued rwts: total=5632,5754,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.610 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:58.610 job1: (groupid=0, jobs=1): err= 0: pid=1243709: Wed May 15 01:01:10 2024 00:13:58.610 read: IOPS=2654, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1002msec) 00:13:58.610 slat (usec): min=3, max=37897, avg=142.08, stdev=1278.35 00:13:58.610 clat (usec): min=655, max=61315, avg=17585.52, stdev=12925.33 00:13:58.610 lat (usec): min=671, max=61321, avg=17727.60, stdev=12995.54 00:13:58.610 clat percentiles (usec): 00:13:58.610 | 1.00th=[ 1205], 5.00th=[ 4080], 10.00th=[ 7504], 20.00th=[11207], 00:13:58.610 | 30.00th=[12649], 40.00th=[13304], 50.00th=[13960], 60.00th=[14222], 00:13:58.610 | 70.00th=[15139], 80.00th=[17171], 90.00th=[47973], 95.00th=[50070], 00:13:58.610 | 99.00th=[61080], 99.50th=[61080], 99.90th=[61080], 99.95th=[61080], 00:13:58.610 | 99.99th=[61080] 00:13:58.610 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:13:58.610 slat (usec): min=4, max=63591, avg=188.35, stdev=1979.00 00:13:58.610 clat (usec): min=1346, max=244502, avg=26152.91, stdev=41848.10 00:13:58.610 lat (msec): min=2, max=244, avg=26.34, stdev=42.07 00:13:58.610 clat percentiles (msec): 00:13:58.610 | 1.00th=[ 
9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:13:58.610 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 13], 00:13:58.610 | 70.00th=[ 17], 80.00th=[ 21], 90.00th=[ 52], 95.00th=[ 97], 00:13:58.610 | 99.00th=[ 232], 99.50th=[ 236], 99.90th=[ 245], 99.95th=[ 245], 00:13:58.610 | 99.99th=[ 245] 00:13:58.610 bw ( KiB/s): min= 6576, max=17776, per=18.88%, avg=12176.00, stdev=7919.60, samples=2 00:13:58.610 iops : min= 1644, max= 4444, avg=3044.00, stdev=1979.90, samples=2 00:13:58.610 lat (usec) : 750=0.03% 00:13:58.610 lat (msec) : 2=0.92%, 4=1.33%, 10=8.97%, 20=69.19%, 50=10.54% 00:13:58.610 lat (msec) : 100=6.52%, 250=2.49% 00:13:58.610 cpu : usr=3.20%, sys=4.40%, ctx=281, majf=0, minf=1 00:13:58.610 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:13:58.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:58.610 issued rwts: total=2660,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.610 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:58.610 job2: (groupid=0, jobs=1): err= 0: pid=1243710: Wed May 15 01:01:10 2024 00:13:58.610 read: IOPS=4456, BW=17.4MiB/s (18.3MB/s)(17.4MiB/1002msec) 00:13:58.610 slat (usec): min=2, max=8129, avg=108.78, stdev=543.58 00:13:58.610 clat (usec): min=1476, max=24351, avg=14226.19, stdev=2162.65 00:13:58.610 lat (usec): min=1481, max=24355, avg=14334.97, stdev=2151.85 00:13:58.610 clat percentiles (usec): 00:13:58.610 | 1.00th=[ 6521], 5.00th=[10552], 10.00th=[11863], 20.00th=[13173], 00:13:58.610 | 30.00th=[13435], 40.00th=[13829], 50.00th=[14222], 60.00th=[14746], 00:13:58.610 | 70.00th=[15270], 80.00th=[15926], 90.00th=[16712], 95.00th=[16909], 00:13:58.610 | 99.00th=[19006], 99.50th=[19006], 99.90th=[23200], 99.95th=[23200], 00:13:58.610 | 99.99th=[24249] 00:13:58.610 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:13:58.610 slat (usec): min=3, max=9926, avg=102.13, stdev=489.59 00:13:58.610 clat (usec): min=8318, max=26870, avg=13716.11, stdev=2651.48 00:13:58.610 lat (usec): min=8329, max=26904, avg=13818.24, stdev=2673.04 00:13:58.610 clat percentiles (usec): 00:13:58.610 | 1.00th=[ 8979], 5.00th=[10028], 10.00th=[10814], 20.00th=[11207], 00:13:58.610 | 30.00th=[11600], 40.00th=[12256], 50.00th=[13304], 60.00th=[14222], 00:13:58.610 | 70.00th=[15664], 80.00th=[16712], 90.00th=[17433], 95.00th=[17433], 00:13:58.610 | 99.00th=[18744], 99.50th=[19530], 99.90th=[24249], 99.95th=[25035], 00:13:58.610 | 99.99th=[26870] 00:13:58.610 bw ( KiB/s): min=16384, max=20521, per=28.61%, avg=18452.50, stdev=2925.30, samples=2 00:13:58.610 iops : min= 4096, max= 5130, avg=4613.00, stdev=731.15, samples=2 00:13:58.610 lat (msec) : 2=0.11%, 10=3.92%, 20=95.57%, 50=0.40% 00:13:58.610 cpu : usr=5.19%, sys=9.09%, ctx=545, majf=0, minf=1 00:13:58.610 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:58.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:58.610 issued rwts: total=4465,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.610 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:58.610 job3: (groupid=0, jobs=1): err= 0: pid=1243711: Wed May 15 01:01:10 2024 00:13:58.610 read: IOPS=2529, BW=9.88MiB/s (10.4MB/s)(10.0MiB/1012msec) 00:13:58.610 slat (usec): min=2, max=52661, avg=172.54, stdev=1659.22 00:13:58.610 clat 
(usec): min=2675, max=82950, avg=21411.03, stdev=16635.44 00:13:58.610 lat (usec): min=2690, max=82976, avg=21583.57, stdev=16732.15 00:13:58.610 clat percentiles (usec): 00:13:58.610 | 1.00th=[ 5276], 5.00th=[11469], 10.00th=[13304], 20.00th=[13960], 00:13:58.610 | 30.00th=[14353], 40.00th=[14615], 50.00th=[15139], 60.00th=[15664], 00:13:58.610 | 70.00th=[19792], 80.00th=[21365], 90.00th=[58459], 95.00th=[72877], 00:13:58.610 | 99.00th=[79168], 99.50th=[82314], 99.90th=[83362], 99.95th=[83362], 00:13:58.610 | 99.99th=[83362] 00:13:58.610 write: IOPS=2851, BW=11.1MiB/s (11.7MB/s)(11.3MiB/1012msec); 0 zone resets 00:13:58.610 slat (usec): min=3, max=56040, avg=185.21, stdev=1759.28 00:13:58.610 clat (usec): min=4730, max=82947, avg=25294.13, stdev=16969.25 00:13:58.610 lat (usec): min=4754, max=82976, avg=25479.34, stdev=17061.87 00:13:58.610 clat percentiles (usec): 00:13:58.610 | 1.00th=[ 5932], 5.00th=[10814], 10.00th=[13435], 20.00th=[14484], 00:13:58.610 | 30.00th=[15008], 40.00th=[15270], 50.00th=[17171], 60.00th=[20055], 00:13:58.610 | 70.00th=[24511], 80.00th=[40109], 90.00th=[59507], 95.00th=[65274], 00:13:58.610 | 99.00th=[71828], 99.50th=[73925], 99.90th=[76022], 99.95th=[83362], 00:13:58.610 | 99.99th=[83362] 00:13:58.610 bw ( KiB/s): min= 9784, max=12280, per=17.10%, avg=11032.00, stdev=1764.94, samples=2 00:13:58.610 iops : min= 2446, max= 3070, avg=2758.00, stdev=441.23, samples=2 00:13:58.610 lat (msec) : 4=0.29%, 10=3.93%, 20=60.43%, 50=23.65%, 100=11.70% 00:13:58.610 cpu : usr=2.77%, sys=4.35%, ctx=320, majf=0, minf=1 00:13:58.610 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:58.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:58.610 issued rwts: total=2560,2886,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.610 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:58.610 00:13:58.610 Run status group 0 (all jobs): 00:13:58.610 READ: bw=59.1MiB/s (62.0MB/s), 9.88MiB/s-21.9MiB/s (10.4MB/s-23.0MB/s), io=59.8MiB (62.7MB), run=1002-1012msec 00:13:58.610 WRITE: bw=63.0MiB/s (66.1MB/s), 11.1MiB/s-22.4MiB/s (11.7MB/s-23.5MB/s), io=63.8MiB (66.8MB), run=1002-1012msec 00:13:58.610 00:13:58.610 Disk stats (read/write): 00:13:58.610 nvme0n1: ios=4648/4994, merge=0/0, ticks=42218/39977, in_queue=82195, util=97.90% 00:13:58.610 nvme0n2: ios=2097/2127, merge=0/0, ticks=23393/58723, in_queue=82116, util=88.43% 00:13:58.610 nvme0n3: ios=3836/4096, merge=0/0, ticks=16582/16264, in_queue=32846, util=92.60% 00:13:58.610 nvme0n4: ios=2356/2560, merge=0/0, ticks=38840/47587, in_queue=86427, util=99.16% 00:13:58.610 01:01:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:58.610 [global] 00:13:58.610 thread=1 00:13:58.610 invalidate=1 00:13:58.610 rw=randwrite 00:13:58.610 time_based=1 00:13:58.610 runtime=1 00:13:58.610 ioengine=libaio 00:13:58.610 direct=1 00:13:58.610 bs=4096 00:13:58.610 iodepth=128 00:13:58.610 norandommap=0 00:13:58.610 numjobs=1 00:13:58.610 00:13:58.610 verify_dump=1 00:13:58.610 verify_backlog=512 00:13:58.610 verify_state_save=0 00:13:58.610 do_verify=1 00:13:58.610 verify=crc32c-intel 00:13:58.610 [job0] 00:13:58.610 filename=/dev/nvme0n1 00:13:58.610 [job1] 00:13:58.610 filename=/dev/nvme0n2 00:13:58.610 [job2] 00:13:58.610 filename=/dev/nvme0n3 00:13:58.610 [job3] 00:13:58.610 
filename=/dev/nvme0n4 00:13:58.610 Could not set queue depth (nvme0n1) 00:13:58.610 Could not set queue depth (nvme0n2) 00:13:58.610 Could not set queue depth (nvme0n3) 00:13:58.610 Could not set queue depth (nvme0n4) 00:13:58.869 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:58.869 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:58.869 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:58.869 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:58.869 fio-3.35 00:13:58.869 Starting 4 threads 00:14:00.251 00:14:00.251 job0: (groupid=0, jobs=1): err= 0: pid=1243940: Wed May 15 01:01:12 2024 00:14:00.251 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:14:00.251 slat (usec): min=2, max=50379, avg=119.62, stdev=973.63 00:14:00.251 clat (usec): min=7047, max=67112, avg=14968.81, stdev=8160.62 00:14:00.251 lat (usec): min=7258, max=73284, avg=15088.43, stdev=8223.55 00:14:00.251 clat percentiles (usec): 00:14:00.251 | 1.00th=[ 7898], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10159], 00:14:00.251 | 30.00th=[10421], 40.00th=[11076], 50.00th=[12125], 60.00th=[13304], 00:14:00.251 | 70.00th=[16188], 80.00th=[18744], 90.00th=[23462], 95.00th=[27395], 00:14:00.251 | 99.00th=[61604], 99.50th=[61604], 99.90th=[65799], 99.95th=[65799], 00:14:00.251 | 99.99th=[67634] 00:14:00.251 write: IOPS=4488, BW=17.5MiB/s (18.4MB/s)(17.6MiB/1002msec); 0 zone resets 00:14:00.251 slat (usec): min=3, max=18297, avg=105.26, stdev=586.87 00:14:00.251 clat (usec): min=943, max=83230, avg=14473.43, stdev=9130.03 00:14:00.251 lat (usec): min=3160, max=83236, avg=14578.69, stdev=9150.92 00:14:00.251 clat percentiles (usec): 00:14:00.251 | 1.00th=[ 5997], 5.00th=[ 8979], 10.00th=[ 9765], 20.00th=[10945], 00:14:00.251 | 30.00th=[11469], 40.00th=[11994], 50.00th=[12649], 60.00th=[13960], 00:14:00.251 | 70.00th=[15139], 80.00th=[16057], 90.00th=[18482], 95.00th=[19792], 00:14:00.251 | 99.00th=[82314], 99.50th=[83362], 99.90th=[83362], 99.95th=[83362], 00:14:00.251 | 99.99th=[83362] 00:14:00.251 bw ( KiB/s): min=14248, max=20712, per=35.15%, avg=17480.00, stdev=4570.74, samples=2 00:14:00.251 iops : min= 3562, max= 5178, avg=4370.00, stdev=1142.68, samples=2 00:14:00.251 lat (usec) : 1000=0.01% 00:14:00.251 lat (msec) : 4=0.37%, 10=12.44%, 20=77.92%, 50=7.68%, 100=1.57% 00:14:00.251 cpu : usr=4.90%, sys=6.89%, ctx=478, majf=0, minf=1 00:14:00.251 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:14:00.251 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.251 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:00.251 issued rwts: total=4096,4497,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:00.251 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:00.251 job1: (groupid=0, jobs=1): err= 0: pid=1243941: Wed May 15 01:01:12 2024 00:14:00.251 read: IOPS=1426, BW=5706KiB/s (5843kB/s)(5740KiB/1006msec) 00:14:00.251 slat (usec): min=3, max=54034, avg=345.30, stdev=2316.60 00:14:00.252 clat (usec): min=4899, max=93792, avg=43020.17, stdev=21327.68 00:14:00.252 lat (usec): min=8851, max=93798, avg=43365.47, stdev=21345.52 00:14:00.252 clat percentiles (usec): 00:14:00.252 | 1.00th=[ 8979], 5.00th=[17433], 10.00th=[19268], 20.00th=[23725], 00:14:00.252 | 30.00th=[25822], 
40.00th=[29230], 50.00th=[40633], 60.00th=[47449], 00:14:00.252 | 70.00th=[56361], 80.00th=[64226], 90.00th=[76022], 95.00th=[76022], 00:14:00.252 | 99.00th=[90702], 99.50th=[90702], 99.90th=[93848], 99.95th=[93848], 00:14:00.252 | 99.99th=[93848] 00:14:00.252 write: IOPS=1526, BW=6107KiB/s (6254kB/s)(6144KiB/1006msec); 0 zone resets 00:14:00.252 slat (usec): min=4, max=79794, avg=317.18, stdev=2802.83 00:14:00.252 clat (msec): min=9, max=202, avg=35.10, stdev=33.59 00:14:00.252 lat (msec): min=9, max=202, avg=35.42, stdev=33.84 00:14:00.252 clat percentiles (msec): 00:14:00.252 | 1.00th=[ 11], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 13], 00:14:00.252 | 30.00th=[ 19], 40.00th=[ 25], 50.00th=[ 30], 60.00th=[ 32], 00:14:00.252 | 70.00th=[ 39], 80.00th=[ 46], 90.00th=[ 54], 95.00th=[ 59], 00:14:00.252 | 99.00th=[ 201], 99.50th=[ 203], 99.90th=[ 203], 99.95th=[ 203], 00:14:00.252 | 99.99th=[ 203] 00:14:00.252 bw ( KiB/s): min= 6128, max= 6160, per=12.35%, avg=6144.00, stdev=22.63, samples=2 00:14:00.252 iops : min= 1532, max= 1540, avg=1536.00, stdev= 5.66, samples=2 00:14:00.252 lat (msec) : 10=1.41%, 20=23.16%, 50=49.88%, 100=23.02%, 250=2.52% 00:14:00.252 cpu : usr=1.99%, sys=3.68%, ctx=157, majf=0, minf=1 00:14:00.252 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:14:00.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:00.252 issued rwts: total=1435,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:00.252 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:00.252 job2: (groupid=0, jobs=1): err= 0: pid=1243942: Wed May 15 01:01:12 2024 00:14:00.252 read: IOPS=3372, BW=13.2MiB/s (13.8MB/s)(13.3MiB/1006msec) 00:14:00.252 slat (usec): min=3, max=28612, avg=149.04, stdev=981.12 00:14:00.252 clat (usec): min=3168, max=76881, avg=19727.46, stdev=13707.20 00:14:00.252 lat (usec): min=7493, max=76885, avg=19876.50, stdev=13805.96 00:14:00.252 clat percentiles (usec): 00:14:00.252 | 1.00th=[ 8717], 5.00th=[10159], 10.00th=[10683], 20.00th=[11207], 00:14:00.252 | 30.00th=[12125], 40.00th=[13173], 50.00th=[13960], 60.00th=[15139], 00:14:00.252 | 70.00th=[16581], 80.00th=[23200], 90.00th=[45876], 95.00th=[54264], 00:14:00.252 | 99.00th=[61080], 99.50th=[65799], 99.90th=[77071], 99.95th=[77071], 00:14:00.252 | 99.99th=[77071] 00:14:00.252 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:14:00.252 slat (usec): min=4, max=9717, avg=127.64, stdev=620.36 00:14:00.252 clat (usec): min=8593, max=34444, avg=16764.46, stdev=4546.64 00:14:00.252 lat (usec): min=8611, max=34464, avg=16892.10, stdev=4585.02 00:14:00.252 clat percentiles (usec): 00:14:00.252 | 1.00th=[10290], 5.00th=[11863], 10.00th=[12780], 20.00th=[13698], 00:14:00.252 | 30.00th=[14222], 40.00th=[14746], 50.00th=[15139], 60.00th=[15533], 00:14:00.252 | 70.00th=[16712], 80.00th=[19530], 90.00th=[25035], 95.00th=[27132], 00:14:00.252 | 99.00th=[28443], 99.50th=[28705], 99.90th=[30802], 99.95th=[32637], 00:14:00.252 | 99.99th=[34341] 00:14:00.252 bw ( KiB/s): min=11824, max=16848, per=28.83%, avg=14336.00, stdev=3552.50, samples=2 00:14:00.252 iops : min= 2956, max= 4212, avg=3584.00, stdev=888.13, samples=2 00:14:00.252 lat (msec) : 4=0.01%, 10=2.26%, 20=75.88%, 50=18.98%, 100=2.87% 00:14:00.252 cpu : usr=5.47%, sys=7.16%, ctx=377, majf=0, minf=1 00:14:00.252 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:14:00.252 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:00.252 issued rwts: total=3393,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:00.252 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:00.252 job3: (groupid=0, jobs=1): err= 0: pid=1243943: Wed May 15 01:01:12 2024 00:14:00.252 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:14:00.252 slat (usec): min=3, max=21417, avg=148.23, stdev=928.56 00:14:00.252 clat (usec): min=3937, max=40807, avg=18192.97, stdev=6358.73 00:14:00.252 lat (usec): min=3974, max=42693, avg=18341.20, stdev=6415.98 00:14:00.252 clat percentiles (usec): 00:14:00.252 | 1.00th=[ 4293], 5.00th=[ 7701], 10.00th=[11338], 20.00th=[13042], 00:14:00.252 | 30.00th=[14222], 40.00th=[16712], 50.00th=[17695], 60.00th=[19006], 00:14:00.252 | 70.00th=[21365], 80.00th=[23725], 90.00th=[27132], 95.00th=[28443], 00:14:00.252 | 99.00th=[33424], 99.50th=[36439], 99.90th=[40633], 99.95th=[40633], 00:14:00.252 | 99.99th=[40633] 00:14:00.252 write: IOPS=2882, BW=11.3MiB/s (11.8MB/s)(11.3MiB/1007msec); 0 zone resets 00:14:00.252 slat (usec): min=3, max=80174, avg=187.31, stdev=1948.54 00:14:00.252 clat (msec): min=2, max=147, avg=27.02, stdev=23.74 00:14:00.252 lat (msec): min=2, max=147, avg=27.21, stdev=23.88 00:14:00.252 clat percentiles (msec): 00:14:00.252 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 10], 20.00th=[ 12], 00:14:00.252 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 18], 60.00th=[ 20], 00:14:00.252 | 70.00th=[ 26], 80.00th=[ 41], 90.00th=[ 67], 95.00th=[ 83], 00:14:00.252 | 99.00th=[ 102], 99.50th=[ 102], 99.90th=[ 102], 99.95th=[ 102], 00:14:00.252 | 99.99th=[ 148] 00:14:00.252 bw ( KiB/s): min= 9920, max=12288, per=22.33%, avg=11104.00, stdev=1674.43, samples=2 00:14:00.252 iops : min= 2480, max= 3072, avg=2776.00, stdev=418.61, samples=2 00:14:00.252 lat (msec) : 4=0.70%, 10=9.04%, 20=52.94%, 50=28.15%, 100=8.00% 00:14:00.252 lat (msec) : 250=1.17% 00:14:00.252 cpu : usr=4.47%, sys=5.67%, ctx=310, majf=0, minf=1 00:14:00.252 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:14:00.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:00.252 issued rwts: total=2560,2903,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:00.252 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:00.252 00:14:00.252 Run status group 0 (all jobs): 00:14:00.252 READ: bw=44.5MiB/s (46.7MB/s), 5706KiB/s-16.0MiB/s (5843kB/s-16.7MB/s), io=44.9MiB (47.0MB), run=1002-1007msec 00:14:00.252 WRITE: bw=48.6MiB/s (50.9MB/s), 6107KiB/s-17.5MiB/s (6254kB/s-18.4MB/s), io=48.9MiB (51.3MB), run=1002-1007msec 00:14:00.252 00:14:00.252 Disk stats (read/write): 00:14:00.252 nvme0n1: ios=3210/3584, merge=0/0, ticks=17097/14192, in_queue=31289, util=98.70% 00:14:00.252 nvme0n2: ios=1163/1536, merge=0/0, ticks=10997/15231, in_queue=26228, util=89.04% 00:14:00.252 nvme0n3: ios=3129/3373, merge=0/0, ticks=17400/16710, in_queue=34110, util=95.72% 00:14:00.252 nvme0n4: ios=2568/2560, merge=0/0, ticks=21548/35817, in_queue=57365, util=97.90% 00:14:00.252 01:01:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:14:00.252 01:01:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1244080 00:14:00.252 01:01:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t 
read -r 10 00:14:00.252 01:01:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:14:00.252 [global] 00:14:00.252 thread=1 00:14:00.252 invalidate=1 00:14:00.252 rw=read 00:14:00.252 time_based=1 00:14:00.252 runtime=10 00:14:00.252 ioengine=libaio 00:14:00.252 direct=1 00:14:00.252 bs=4096 00:14:00.252 iodepth=1 00:14:00.252 norandommap=1 00:14:00.252 numjobs=1 00:14:00.252 00:14:00.252 [job0] 00:14:00.252 filename=/dev/nvme0n1 00:14:00.252 [job1] 00:14:00.252 filename=/dev/nvme0n2 00:14:00.252 [job2] 00:14:00.252 filename=/dev/nvme0n3 00:14:00.252 [job3] 00:14:00.252 filename=/dev/nvme0n4 00:14:00.252 Could not set queue depth (nvme0n1) 00:14:00.252 Could not set queue depth (nvme0n2) 00:14:00.252 Could not set queue depth (nvme0n3) 00:14:00.252 Could not set queue depth (nvme0n4) 00:14:00.252 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:00.252 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:00.252 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:00.252 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:00.252 fio-3.35 00:14:00.252 Starting 4 threads 00:14:02.844 01:01:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:14:03.410 01:01:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:14:03.410 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=1228800, buflen=4096 00:14:03.410 fio: pid=1244172, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:03.410 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=376832, buflen=4096 00:14:03.410 fio: pid=1244171, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:03.410 01:01:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:03.410 01:01:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:14:03.667 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=344064, buflen=4096 00:14:03.667 fio: pid=1244169, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:03.667 01:01:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:03.667 01:01:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:14:03.924 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=32374784, buflen=4096 00:14:03.924 fio: pid=1244170, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:03.924 01:01:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:03.924 01:01:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:14:04.183 00:14:04.183 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1244169: Wed May 15 01:01:16 2024 00:14:04.183 read: 
IOPS=25, BW=99.2KiB/s (102kB/s)(336KiB/3386msec) 00:14:04.183 slat (usec): min=12, max=6808, avg=101.22, stdev=736.19 00:14:04.183 clat (usec): min=576, max=42163, avg=40189.84, stdev=6221.28 00:14:04.183 lat (usec): min=608, max=48971, avg=40292.10, stdev=6289.99 00:14:04.183 clat percentiles (usec): 00:14:04.183 | 1.00th=[ 578], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:14:04.183 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:04.183 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:14:04.183 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:04.183 | 99.99th=[42206] 00:14:04.183 bw ( KiB/s): min= 96, max= 104, per=1.08%, avg=98.67, stdev= 4.13, samples=6 00:14:04.183 iops : min= 24, max= 26, avg=24.67, stdev= 1.03, samples=6 00:14:04.183 lat (usec) : 750=1.18%, 1000=1.18% 00:14:04.183 lat (msec) : 50=96.47% 00:14:04.183 cpu : usr=0.00%, sys=0.12%, ctx=86, majf=0, minf=1 00:14:04.183 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:04.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.183 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.183 issued rwts: total=85,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.183 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:04.183 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1244170: Wed May 15 01:01:16 2024 00:14:04.183 read: IOPS=2149, BW=8596KiB/s (8802kB/s)(30.9MiB/3678msec) 00:14:04.183 slat (usec): min=4, max=14064, avg=19.72, stdev=298.85 00:14:04.183 clat (usec): min=309, max=1320, avg=442.48, stdev=56.46 00:14:04.183 lat (usec): min=314, max=14535, avg=462.20, stdev=305.12 00:14:04.183 clat percentiles (usec): 00:14:04.183 | 1.00th=[ 322], 5.00th=[ 334], 10.00th=[ 359], 20.00th=[ 400], 00:14:04.183 | 30.00th=[ 424], 40.00th=[ 441], 50.00th=[ 449], 60.00th=[ 457], 00:14:04.183 | 70.00th=[ 474], 80.00th=[ 486], 90.00th=[ 510], 95.00th=[ 523], 00:14:04.183 | 99.00th=[ 545], 99.50th=[ 562], 99.90th=[ 619], 99.95th=[ 676], 00:14:04.183 | 99.99th=[ 1319] 00:14:04.183 bw ( KiB/s): min= 7992, max= 9720, per=94.28%, avg=8592.57, stdev=654.14, samples=7 00:14:04.183 iops : min= 1998, max= 2430, avg=2148.14, stdev=163.53, samples=7 00:14:04.183 lat (usec) : 500=86.15%, 750=13.80%, 1000=0.03% 00:14:04.183 lat (msec) : 2=0.01% 00:14:04.183 cpu : usr=1.77%, sys=3.51%, ctx=7912, majf=0, minf=1 00:14:04.183 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:04.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.183 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.183 issued rwts: total=7905,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.183 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:04.183 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1244171: Wed May 15 01:01:16 2024 00:14:04.183 read: IOPS=29, BW=117KiB/s (120kB/s)(368KiB/3142msec) 00:14:04.183 slat (nsec): min=6353, max=40721, avg=19844.53, stdev=9804.32 00:14:04.183 clat (usec): min=464, max=42050, avg=34117.14, stdev=15513.75 00:14:04.183 lat (usec): min=471, max=42083, avg=34136.84, stdev=15518.39 00:14:04.183 clat percentiles (usec): 00:14:04.183 | 1.00th=[ 465], 5.00th=[ 469], 10.00th=[ 482], 20.00th=[40633], 00:14:04.183 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 
60.00th=[41157], 00:14:04.183 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:14:04.183 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:04.183 | 99.99th=[42206] 00:14:04.183 bw ( KiB/s): min= 96, max= 216, per=1.28%, avg=117.33, stdev=48.44, samples=6 00:14:04.183 iops : min= 24, max= 54, avg=29.33, stdev=12.11, samples=6 00:14:04.183 lat (usec) : 500=10.75%, 750=6.45% 00:14:04.183 lat (msec) : 50=81.72% 00:14:04.183 cpu : usr=0.10%, sys=0.00%, ctx=93, majf=0, minf=1 00:14:04.183 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:04.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.183 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.183 issued rwts: total=93,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.183 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:04.183 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1244172: Wed May 15 01:01:16 2024 00:14:04.183 read: IOPS=103, BW=411KiB/s (421kB/s)(1200KiB/2918msec) 00:14:04.183 slat (nsec): min=5293, max=69125, avg=19332.12, stdev=11071.21 00:14:04.183 clat (usec): min=328, max=41966, avg=9699.71, stdev=17109.02 00:14:04.183 lat (usec): min=334, max=41983, avg=9719.06, stdev=17110.93 00:14:04.183 clat percentiles (usec): 00:14:04.183 | 1.00th=[ 330], 5.00th=[ 338], 10.00th=[ 343], 20.00th=[ 359], 00:14:04.183 | 30.00th=[ 371], 40.00th=[ 379], 50.00th=[ 392], 60.00th=[ 408], 00:14:04.183 | 70.00th=[ 429], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:14:04.183 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:04.183 | 99.99th=[42206] 00:14:04.183 bw ( KiB/s): min= 96, max= 1888, per=5.07%, avg=462.40, stdev=797.12, samples=5 00:14:04.183 iops : min= 24, max= 472, avg=115.60, stdev=199.28, samples=5 00:14:04.183 lat (usec) : 500=74.42%, 750=1.99%, 1000=0.33% 00:14:04.183 lat (msec) : 50=22.92% 00:14:04.183 cpu : usr=0.03%, sys=0.27%, ctx=301, majf=0, minf=1 00:14:04.183 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:04.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.183 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.183 issued rwts: total=301,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.183 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:04.183 00:14:04.183 Run status group 0 (all jobs): 00:14:04.183 READ: bw=9114KiB/s (9332kB/s), 99.2KiB/s-8596KiB/s (102kB/s-8802kB/s), io=32.7MiB (34.3MB), run=2918-3678msec 00:14:04.183 00:14:04.183 Disk stats (read/write): 00:14:04.183 nvme0n1: ios=83/0, merge=0/0, ticks=3336/0, in_queue=3336, util=95.79% 00:14:04.183 nvme0n2: ios=7722/0, merge=0/0, ticks=3355/0, in_queue=3355, util=94.96% 00:14:04.183 nvme0n3: ios=91/0, merge=0/0, ticks=3099/0, in_queue=3099, util=96.79% 00:14:04.183 nvme0n4: ios=298/0, merge=0/0, ticks=2825/0, in_queue=2825, util=96.75% 00:14:04.183 01:01:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:04.183 01:01:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:14:04.442 01:01:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:04.442 01:01:16 nvmf_tcp.nvmf_fio_target 
-- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:14:04.700 01:01:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:04.700 01:01:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:14:04.959 01:01:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:04.959 01:01:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:14:05.219 01:01:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:14:05.219 01:01:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1244080 00:14:05.219 01:01:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:14:05.219 01:01:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:05.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.478 01:01:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:05.478 01:01:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:14:05.478 01:01:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:05.478 01:01:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:05.478 01:01:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:05.478 01:01:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:05.478 01:01:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:14:05.478 01:01:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:14:05.478 01:01:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:14:05.478 nvmf hotplug test: fio failed as expected 00:14:05.478 01:01:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:05.736 01:01:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:14:05.736 01:01:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:14:05.736 01:01:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:14:05.736 01:01:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:14:05.736 01:01:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:14:05.736 01:01:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:05.736 01:01:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:14:05.736 01:01:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:05.736 01:01:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:14:05.736 01:01:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:05.736 01:01:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:05.736 rmmod nvme_tcp 00:14:05.736 rmmod nvme_fabrics 00:14:05.736 rmmod nvme_keyring 00:14:05.736 01:01:18 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:05.736 01:01:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:14:05.736 01:01:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:14:05.736 01:01:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1242167 ']' 00:14:05.736 01:01:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1242167 00:14:05.736 01:01:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 1242167 ']' 00:14:05.736 01:01:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 1242167 00:14:05.736 01:01:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:14:05.736 01:01:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:05.736 01:01:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1242167 00:14:05.736 01:01:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:05.736 01:01:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:05.736 01:01:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1242167' 00:14:05.736 killing process with pid 1242167 00:14:05.736 01:01:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 1242167 00:14:05.736 [2024-05-15 01:01:18.049443] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:05.736 01:01:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 1242167 00:14:05.995 01:01:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:05.995 01:01:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:05.995 01:01:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:05.995 01:01:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:05.995 01:01:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:05.995 01:01:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.995 01:01:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:05.995 01:01:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.539 01:01:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:08.539 00:14:08.539 real 0m23.799s 00:14:08.539 user 1m17.272s 00:14:08.539 sys 0m7.738s 00:14:08.539 01:01:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:08.539 01:01:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.539 ************************************ 00:14:08.539 END TEST nvmf_fio_target 00:14:08.539 ************************************ 00:14:08.539 01:01:20 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:08.539 01:01:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:08.539 01:01:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:08.539 01:01:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:08.539 
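For reference, the nvmf_fio_target hotplug check that finishes above reduces to the following shell sketch, reconstructed only from the commands visible in the trace; the loop and the explicit status handling are simplifying assumptions, not the literal contents of target/fio.sh:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk             # path as used throughout the trace
$SPDK/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &     # background 10s read job, as in fio.sh@58
fio_pid=$!
sleep 3                                                            # fio.sh@61
$SPDK/scripts/rpc.py bdev_raid_delete concat0                      # fio.sh@63
$SPDK/scripts/rpc.py bdev_raid_delete raid0                        # fio.sh@64
for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
  $SPDK/scripts/rpc.py bdev_malloc_delete "$malloc_bdev"           # fio.sh@66, pulls the namespaces out from under fio
done
if wait "$fio_pid"; then
  echo "unexpected: fio survived bdev removal"
else
  echo 'nvmf hotplug test: fio failed as expected'                 # same message as in the trace above
fi
nvme disconnect -n nqn.2016-06.io.spdk:cnode1                      # fio.sh@72
$SPDK/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # fio.sh@83

The io_u "Remote I/O error" (err=121) messages higher up in the log are the expected symptom of that bdev removal, which is why a non-zero fio status is treated as success here.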
************************************ 00:14:08.539 START TEST nvmf_bdevio 00:14:08.539 ************************************ 00:14:08.539 01:01:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:08.539 * Looking for test storage... 00:14:08.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:14:08.540 01:01:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:11.071 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:11.071 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:14:11.071 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:11.071 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:11.071 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:11.071 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:11.071 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:11.071 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:14:11.071 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:11.071 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:14:11.071 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:14:11.071 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:14:11.071 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:14:11.071 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:14:11.071 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:14:11.071 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:11.071 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:11.071 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:11.071 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:11.071 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:11.071 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:11.071 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:11.071 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:11.071 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:11.071 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:11.072 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:11.072 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:11.072 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:11.072 
Found net devices under 0000:0a:00.1: cvl_0_1 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:11.072 01:01:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:11.072 01:01:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:11.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:11.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:14:11.072 00:14:11.072 --- 10.0.0.2 ping statistics --- 00:14:11.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.072 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:14:11.072 01:01:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:11.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:11.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:14:11.072 00:14:11.072 --- 10.0.0.1 ping statistics --- 00:14:11.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.072 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:14:11.072 01:01:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:11.072 01:01:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:14:11.072 01:01:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:11.072 01:01:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:11.072 01:01:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:11.072 01:01:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:11.072 01:01:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:11.072 01:01:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:11.072 01:01:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:11.072 01:01:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:11.072 01:01:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:11.072 01:01:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:11.072 01:01:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:11.072 01:01:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1247207 00:14:11.072 01:01:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:11.072 01:01:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1247207 00:14:11.072 01:01:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 1247207 ']' 00:14:11.072 01:01:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.072 01:01:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:11.072 01:01:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.072 01:01:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:11.072 01:01:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:11.072 [2024-05-15 01:01:23.090089] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:14:11.072 [2024-05-15 01:01:23.090183] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.072 EAL: No free 2048 kB hugepages reported on node 1 00:14:11.072 [2024-05-15 01:01:23.179867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:11.072 [2024-05-15 01:01:23.305992] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.072 [2024-05-15 01:01:23.306063] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:11.072 [2024-05-15 01:01:23.306079] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:11.072 [2024-05-15 01:01:23.306092] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:11.072 [2024-05-15 01:01:23.306104] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:11.072 [2024-05-15 01:01:23.306169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:11.072 [2024-05-15 01:01:23.306228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:11.072 [2024-05-15 01:01:23.306279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:11.072 [2024-05-15 01:01:23.306283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:11.072 01:01:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:11.072 01:01:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:14:11.072 01:01:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:11.072 01:01:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:11.072 01:01:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:11.072 01:01:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.072 01:01:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:11.072 01:01:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.072 01:01:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:11.332 [2024-05-15 01:01:23.461819] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:11.332 01:01:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.332 01:01:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:11.332 01:01:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.332 01:01:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:11.332 Malloc0 00:14:11.332 01:01:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.332 01:01:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:11.332 01:01:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.332 01:01:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:11.332 01:01:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.332 01:01:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:11.332 01:01:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.332 01:01:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:11.332 01:01:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.332 01:01:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:11.332 01:01:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.332 01:01:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
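The target-side setup traced by bdevio.sh@18-22 above is five RPCs against the freshly started nvmf_tgt. A minimal stand-alone equivalent is sketched below as direct scripts/rpc.py calls; writing them this way is an assumption about what the rpc_cmd wrapper forwards to, not a copy of bdevio.sh:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                                           # bdevio.sh@18
$SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                                              # bdevio.sh@19: 64 MiB bdev, 512-byte blocks
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001         # bdevio.sh@20
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                          # bdevio.sh@21
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 # bdevio.sh@22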
00:14:11.332 [2024-05-15 01:01:23.512777] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:11.332 [2024-05-15 01:01:23.513079] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.332 01:01:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.332 01:01:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:11.332 01:01:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:11.332 01:01:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:14:11.332 01:01:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:14:11.332 01:01:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:11.332 01:01:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:11.332 { 00:14:11.332 "params": { 00:14:11.332 "name": "Nvme$subsystem", 00:14:11.332 "trtype": "$TEST_TRANSPORT", 00:14:11.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:11.332 "adrfam": "ipv4", 00:14:11.332 "trsvcid": "$NVMF_PORT", 00:14:11.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:11.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:11.332 "hdgst": ${hdgst:-false}, 00:14:11.332 "ddgst": ${ddgst:-false} 00:14:11.332 }, 00:14:11.332 "method": "bdev_nvme_attach_controller" 00:14:11.332 } 00:14:11.332 EOF 00:14:11.332 )") 00:14:11.332 01:01:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:14:11.332 01:01:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:14:11.332 01:01:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:14:11.332 01:01:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:11.332 "params": { 00:14:11.332 "name": "Nvme1", 00:14:11.332 "trtype": "tcp", 00:14:11.332 "traddr": "10.0.0.2", 00:14:11.332 "adrfam": "ipv4", 00:14:11.332 "trsvcid": "4420", 00:14:11.332 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:11.332 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:11.332 "hdgst": false, 00:14:11.332 "ddgst": false 00:14:11.332 }, 00:14:11.332 "method": "bdev_nvme_attach_controller" 00:14:11.332 }' 00:14:11.332 [2024-05-15 01:01:23.557854] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:14:11.332 [2024-05-15 01:01:23.557937] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1247235 ] 00:14:11.332 EAL: No free 2048 kB hugepages reported on node 1 00:14:11.332 [2024-05-15 01:01:23.629924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:11.591 [2024-05-15 01:01:23.747171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.591 [2024-05-15 01:01:23.747219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:11.591 [2024-05-15 01:01:23.747223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.591 I/O targets: 00:14:11.591 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:11.591 00:14:11.591 00:14:11.591 CUnit - A unit testing framework for C - Version 2.1-3 00:14:11.591 http://cunit.sourceforge.net/ 00:14:11.591 00:14:11.591 00:14:11.591 Suite: bdevio tests on: Nvme1n1 00:14:11.848 Test: blockdev write read block ...passed 00:14:11.848 Test: blockdev write zeroes read block ...passed 00:14:11.848 Test: blockdev write zeroes read no split ...passed 00:14:11.848 Test: blockdev write zeroes read split ...passed 00:14:11.848 Test: blockdev write zeroes read split partial ...passed 00:14:11.848 Test: blockdev reset ...[2024-05-15 01:01:24.187821] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:11.848 [2024-05-15 01:01:24.187921] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15429f0 (9): Bad file descriptor 00:14:12.117 [2024-05-15 01:01:24.325690] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:12.117 passed 00:14:12.117 Test: blockdev write read 8 blocks ...passed 00:14:12.117 Test: blockdev write read size > 128k ...passed 00:14:12.117 Test: blockdev write read invalid size ...passed 00:14:12.117 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:12.117 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:12.117 Test: blockdev write read max offset ...passed 00:14:12.117 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:12.117 Test: blockdev writev readv 8 blocks ...passed 00:14:12.117 Test: blockdev writev readv 30 x 1block ...passed 00:14:12.117 Test: blockdev writev readv block ...passed 00:14:12.117 Test: blockdev writev readv size > 128k ...passed 00:14:12.117 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:12.117 Test: blockdev comparev and writev ...[2024-05-15 01:01:24.501984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:12.117 [2024-05-15 01:01:24.502020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:12.117 [2024-05-15 01:01:24.502045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:12.117 [2024-05-15 01:01:24.502062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:12.117 [2024-05-15 01:01:24.502548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:12.117 [2024-05-15 01:01:24.502573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:12.117 [2024-05-15 01:01:24.502594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:12.117 [2024-05-15 01:01:24.502610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:12.117 [2024-05-15 01:01:24.503043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:12.117 [2024-05-15 01:01:24.503068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:12.117 [2024-05-15 01:01:24.503090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:12.117 [2024-05-15 01:01:24.503106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:12.117 [2024-05-15 01:01:24.503588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:12.117 [2024-05-15 01:01:24.503611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:12.117 [2024-05-15 01:01:24.503632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:12.117 [2024-05-15 01:01:24.503648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:12.377 passed 00:14:12.377 Test: blockdev nvme passthru rw ...passed 00:14:12.377 Test: blockdev nvme passthru vendor specific ...[2024-05-15 01:01:24.587359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:12.377 [2024-05-15 01:01:24.587385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:12.377 [2024-05-15 01:01:24.587628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:12.377 [2024-05-15 01:01:24.587650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:12.377 [2024-05-15 01:01:24.587920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:12.377 [2024-05-15 01:01:24.587952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:12.377 [2024-05-15 01:01:24.588220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:12.377 [2024-05-15 01:01:24.588243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:12.377 passed 00:14:12.377 Test: blockdev nvme admin passthru ...passed 00:14:12.377 Test: blockdev copy ...passed 00:14:12.377 00:14:12.377 Run Summary: Type Total Ran Passed Failed Inactive 00:14:12.377 suites 1 1 n/a 0 0 00:14:12.377 tests 23 23 23 0 0 00:14:12.377 asserts 152 152 152 0 n/a 00:14:12.377 00:14:12.377 Elapsed time = 1.331 seconds 00:14:12.636 01:01:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:12.636 01:01:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.636 01:01:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:12.636 01:01:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.636 01:01:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:12.636 01:01:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:14:12.636 01:01:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:12.636 01:01:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:14:12.636 01:01:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:12.636 01:01:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:14:12.636 01:01:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:12.636 01:01:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:12.636 rmmod nvme_tcp 00:14:12.636 rmmod nvme_fabrics 00:14:12.636 rmmod nvme_keyring 00:14:12.636 01:01:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:12.636 01:01:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:14:12.636 01:01:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:14:12.636 01:01:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1247207 ']' 00:14:12.636 01:01:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1247207 00:14:12.636 01:01:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 
1247207 ']' 00:14:12.636 01:01:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 1247207 00:14:12.636 01:01:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:14:12.636 01:01:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:12.636 01:01:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1247207 00:14:12.636 01:01:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:14:12.636 01:01:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:14:12.636 01:01:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1247207' 00:14:12.636 killing process with pid 1247207 00:14:12.636 01:01:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 1247207 00:14:12.636 [2024-05-15 01:01:24.965133] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:12.636 01:01:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 1247207 00:14:12.895 01:01:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:12.895 01:01:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:12.895 01:01:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:12.895 01:01:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:12.895 01:01:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:12.895 01:01:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.895 01:01:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:12.895 01:01:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.434 01:01:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:15.434 00:14:15.434 real 0m6.912s 00:14:15.434 user 0m10.984s 00:14:15.434 sys 0m2.405s 00:14:15.434 01:01:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:15.434 01:01:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:15.434 ************************************ 00:14:15.434 END TEST nvmf_bdevio 00:14:15.434 ************************************ 00:14:15.434 01:01:27 nvmf_tcp -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:14:15.434 01:01:27 nvmf_tcp -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:15.434 01:01:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:14:15.434 01:01:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:15.434 01:01:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:15.434 ************************************ 00:14:15.434 START TEST nvmf_bdevio_no_huge 00:14:15.434 ************************************ 00:14:15.434 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:15.434 * Looking for test storage... 
00:14:15.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:15.434 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:15.434 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:14:15.434 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:15.434 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.434 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.434 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:15.434 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:15.434 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:15.434 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.434 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:15.434 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.434 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:15.434 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:15.434 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:15.434 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.434 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:15.434 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:15.434 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:15.434 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:15.434 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.434 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.434 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.435 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.435 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.435 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.435 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:14:15.435 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.435 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:14:15.435 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:15.435 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:15.435 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:15.435 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.435 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.435 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:15.435 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:15.435 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:15.435 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:15.435 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:15.435 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:14:15.435 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:15.435 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:15.435 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:15.435 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:15.435 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:15.435 01:01:27 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.435 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.435 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.435 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:15.435 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:15.435 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:14:15.435 01:01:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:17.968 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:17.969 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:17.969 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:17.969 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:17.969 01:01:29 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:17.969 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:17.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:17.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:14:17.969 00:14:17.969 --- 10.0.0.2 ping statistics --- 00:14:17.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.969 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:17.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:17.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:14:17.969 00:14:17.969 --- 10.0.0.1 ping statistics --- 00:14:17.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.969 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1249712 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1249712 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 1249712 ']' 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:17.969 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
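The nvmf_tcp_init steps above split the two e810 ports between namespaces, so the target side (10.0.0.2 on cvl_0_0, inside cvl_0_0_ns_spdk) and the initiator side (10.0.0.1 on cvl_0_1, root namespace) exchange NVMe/TCP traffic over a real link; the target binary is then launched with the ip netns exec prefix seen above. A condensed sketch of that topology, interface names taken from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port toward the initiator-side interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # reachability check in both directions, matching the ping output above
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1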
00:14:17.970 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:17.970 01:01:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:17.970 [2024-05-15 01:01:29.958627] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:14:17.970 [2024-05-15 01:01:29.958717] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:17.970 [2024-05-15 01:01:30.045602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:17.970 [2024-05-15 01:01:30.154561] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:17.970 [2024-05-15 01:01:30.154627] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:17.970 [2024-05-15 01:01:30.154641] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:17.970 [2024-05-15 01:01:30.154667] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:17.970 [2024-05-15 01:01:30.154677] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:17.970 [2024-05-15 01:01:30.154734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:17.970 [2024-05-15 01:01:30.154767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:17.970 [2024-05-15 01:01:30.154786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:17.970 [2024-05-15 01:01:30.154789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:17.970 [2024-05-15 01:01:30.275865] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:17.970 Malloc0 00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:17.970 [2024-05-15 01:01:30.313582] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:17.970 [2024-05-15 01:01:30.313855] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:17.970 { 00:14:17.970 "params": { 00:14:17.970 "name": "Nvme$subsystem", 00:14:17.970 "trtype": "$TEST_TRANSPORT", 00:14:17.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:17.970 "adrfam": "ipv4", 00:14:17.970 "trsvcid": "$NVMF_PORT", 00:14:17.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:17.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:17.970 "hdgst": ${hdgst:-false}, 00:14:17.970 "ddgst": ${ddgst:-false} 00:14:17.970 }, 00:14:17.970 "method": "bdev_nvme_attach_controller" 00:14:17.970 } 00:14:17.970 EOF 00:14:17.970 )") 00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:14:17.970 01:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:17.970 "params": { 00:14:17.970 "name": "Nvme1", 00:14:17.970 "trtype": "tcp", 00:14:17.970 "traddr": "10.0.0.2", 00:14:17.970 "adrfam": "ipv4", 00:14:17.970 "trsvcid": "4420", 00:14:17.970 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:17.970 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:17.970 "hdgst": false, 00:14:17.970 "ddgst": false 00:14:17.970 }, 00:14:17.970 "method": "bdev_nvme_attach_controller" 00:14:17.970 }' 00:14:17.970 [2024-05-15 01:01:30.356299] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:14:17.970 [2024-05-15 01:01:30.356382] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1249741 ] 00:14:18.229 [2024-05-15 01:01:30.434493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:18.229 [2024-05-15 01:01:30.547773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:18.229 [2024-05-15 01:01:30.547820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:18.229 [2024-05-15 01:01:30.547823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.485 I/O targets: 00:14:18.485 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:18.485 00:14:18.485 00:14:18.485 CUnit - A unit testing framework for C - Version 2.1-3 00:14:18.485 http://cunit.sourceforge.net/ 00:14:18.485 00:14:18.485 00:14:18.485 Suite: bdevio tests on: Nvme1n1 00:14:18.485 Test: blockdev write read block ...passed 00:14:18.485 Test: blockdev write zeroes read block ...passed 00:14:18.485 Test: blockdev write zeroes read no split ...passed 00:14:18.485 Test: blockdev write zeroes read split ...passed 00:14:18.743 Test: blockdev write zeroes read split partial ...passed 00:14:18.743 Test: blockdev reset ...[2024-05-15 01:01:30.936403] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:18.743 [2024-05-15 01:01:30.936505] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x194d340 (9): Bad file descriptor 00:14:18.743 [2024-05-15 01:01:30.948479] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:18.743 passed 00:14:18.743 Test: blockdev write read 8 blocks ...passed 00:14:18.743 Test: blockdev write read size > 128k ...passed 00:14:18.743 Test: blockdev write read invalid size ...passed 00:14:18.743 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:18.743 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:18.743 Test: blockdev write read max offset ...passed 00:14:18.743 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:18.743 Test: blockdev writev readv 8 blocks ...passed 00:14:18.743 Test: blockdev writev readv 30 x 1block ...passed 00:14:18.743 Test: blockdev writev readv block ...passed 00:14:18.743 Test: blockdev writev readv size > 128k ...passed 00:14:18.743 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:18.743 Test: blockdev comparev and writev ...[2024-05-15 01:01:31.129194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:18.743 [2024-05-15 01:01:31.129231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:18.743 [2024-05-15 01:01:31.129256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:18.743 [2024-05-15 01:01:31.129273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:18.743 [2024-05-15 01:01:31.129717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:18.743 [2024-05-15 01:01:31.129742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:18.743 [2024-05-15 01:01:31.129764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:18.743 [2024-05-15 01:01:31.129781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:18.743 [2024-05-15 01:01:31.130251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:18.743 [2024-05-15 01:01:31.130276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:18.743 [2024-05-15 01:01:31.130298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:18.743 [2024-05-15 01:01:31.130315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:18.743 [2024-05-15 01:01:31.130766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:18.744 [2024-05-15 01:01:31.130789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:18.744 [2024-05-15 01:01:31.130811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:18.744 [2024-05-15 01:01:31.130827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:19.003 passed 00:14:19.003 Test: blockdev nvme passthru rw ...passed 00:14:19.003 Test: blockdev nvme passthru vendor specific ...[2024-05-15 01:01:31.213408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:19.003 [2024-05-15 01:01:31.213437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:19.003 [2024-05-15 01:01:31.213729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:19.003 [2024-05-15 01:01:31.213753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:19.003 [2024-05-15 01:01:31.214038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:19.003 [2024-05-15 01:01:31.214062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:19.003 [2024-05-15 01:01:31.214350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:19.003 [2024-05-15 01:01:31.214373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:19.003 passed 00:14:19.003 Test: blockdev nvme admin passthru ...passed 00:14:19.003 Test: blockdev copy ...passed 00:14:19.003 00:14:19.003 Run Summary: Type Total Ran Passed Failed Inactive 00:14:19.003 suites 1 1 n/a 0 0 00:14:19.003 tests 23 23 23 0 0 00:14:19.003 asserts 152 152 152 0 n/a 00:14:19.003 00:14:19.003 Elapsed time = 1.120 seconds 00:14:19.260 01:01:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:19.260 01:01:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.260 01:01:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:19.260 01:01:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.260 01:01:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:19.260 01:01:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:14:19.260 01:01:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:19.260 01:01:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:14:19.260 01:01:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:19.260 01:01:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:14:19.260 01:01:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:19.260 01:01:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:19.260 rmmod nvme_tcp 00:14:19.519 rmmod nvme_fabrics 00:14:19.519 rmmod nvme_keyring 00:14:19.519 01:01:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:19.519 01:01:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:14:19.519 01:01:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:14:19.519 01:01:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1249712 ']' 00:14:19.519 01:01:31 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1249712 00:14:19.519 01:01:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 1249712 ']' 00:14:19.519 01:01:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 1249712 00:14:19.519 01:01:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:14:19.519 01:01:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:19.519 01:01:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1249712 00:14:19.519 01:01:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:14:19.519 01:01:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:14:19.519 01:01:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1249712' 00:14:19.519 killing process with pid 1249712 00:14:19.519 01:01:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 1249712 00:14:19.519 [2024-05-15 01:01:31.712602] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:19.519 01:01:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 1249712 00:14:19.777 01:01:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:19.777 01:01:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:19.777 01:01:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:19.777 01:01:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:19.777 01:01:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:19.777 01:01:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.777 01:01:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:19.777 01:01:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.308 01:01:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:22.308 00:14:22.308 real 0m6.808s 00:14:22.308 user 0m10.328s 00:14:22.308 sys 0m2.729s 00:14:22.308 01:01:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:22.308 01:01:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:22.308 ************************************ 00:14:22.308 END TEST nvmf_bdevio_no_huge 00:14:22.308 ************************************ 00:14:22.308 01:01:34 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:22.308 01:01:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:22.308 01:01:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:22.308 01:01:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:22.308 ************************************ 00:14:22.308 START TEST nvmf_tls 00:14:22.308 ************************************ 00:14:22.308 01:01:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 
00:14:22.308 * Looking for test storage... 00:14:22.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:22.308 01:01:34 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:22.308 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:14:22.308 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:22.308 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:22.308 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:22.308 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:14:22.309 01:01:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:24.835 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:24.835 01:01:36 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@291 -- # pci_devs=() 00:14:24.835 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:24.835 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:24.835 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:24.835 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:24.835 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:24.835 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:14:24.835 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:24.835 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:14:24.835 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:14:24.835 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:24.836 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:24.836 
01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:24.836 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:24.836 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:24.836 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:24.836 
01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:24.836 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:24.836 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:14:24.836 00:14:24.836 --- 10.0.0.2 ping statistics --- 00:14:24.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.836 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:24.836 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:24.836 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:14:24.836 00:14:24.836 --- 10.0.0.1 ping statistics --- 00:14:24.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.836 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1252223 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1252223 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1252223 ']' 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:24.836 01:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:24.836 [2024-05-15 01:01:36.806647] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:14:24.836 [2024-05-15 01:01:36.806739] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:24.836 EAL: No free 2048 kB hugepages reported on node 1 00:14:24.836 [2024-05-15 01:01:36.885295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.836 [2024-05-15 01:01:36.994662] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:24.836 [2024-05-15 01:01:36.994719] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
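The namespace plumbing that nvmf_tcp_init performed above can be reproduced by hand with the commands below (a sketch using the same cvl_0_0/cvl_0_1 names and 10.0.0.0/24 addressing seen in this run; the E810 port names will differ on other machines):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one E810 port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

This is why the nvmf_tgt launch here, and the spdk_nvme_perf run later in the log, are prefixed with ip netns exec cvl_0_0_ns_spdk.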
00:14:24.836 [2024-05-15 01:01:36.994747] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:24.836 [2024-05-15 01:01:36.994766] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:24.836 [2024-05-15 01:01:36.994777] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:24.836 [2024-05-15 01:01:36.994803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.401 01:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:25.401 01:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:25.401 01:01:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:25.401 01:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:25.401 01:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:25.660 01:01:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:25.660 01:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:14:25.660 01:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:25.660 true 00:14:25.918 01:01:38 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:25.918 01:01:38 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:14:25.918 01:01:38 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:14:25.918 01:01:38 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:14:25.918 01:01:38 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:26.176 01:01:38 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:26.176 01:01:38 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:14:26.434 01:01:38 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:14:26.434 01:01:38 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:14:26.434 01:01:38 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:26.692 01:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:26.692 01:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:14:26.949 01:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:14:26.949 01:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:14:26.949 01:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:26.949 01:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:14:27.207 01:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:14:27.207 01:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:14:27.207 01:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:14:27.465 01:01:39 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # jq -r .enable_ktls 00:14:27.465 01:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:27.722 01:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:14:27.722 01:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:14:27.722 01:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:14:27.981 01:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:27.981 01:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:14:28.240 01:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:14:28.240 01:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:14:28.240 01:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:14:28.240 01:01:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:14:28.240 01:01:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:28.240 01:01:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:28.240 01:01:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:14:28.240 01:01:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:14:28.240 01:01:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:28.240 01:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:28.240 01:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:14:28.240 01:01:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:14:28.240 01:01:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:28.240 01:01:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:28.240 01:01:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:14:28.240 01:01:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:14:28.240 01:01:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:28.240 01:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:28.240 01:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:14:28.240 01:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.Bu4qD7vXNp 00:14:28.240 01:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:14:28.240 01:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.PdrcTkQHfC 00:14:28.240 01:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:28.240 01:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:28.240 01:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.Bu4qD7vXNp 00:14:28.240 01:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.PdrcTkQHfC 00:14:28.240 01:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:14:28.498 01:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:14:29.063 01:01:41 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.Bu4qD7vXNp 00:14:29.063 01:01:41 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Bu4qD7vXNp 00:14:29.063 01:01:41 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:29.063 [2024-05-15 01:01:41.418417] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:29.063 01:01:41 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:29.321 01:01:41 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:29.581 [2024-05-15 01:01:41.903677] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:29.581 [2024-05-15 01:01:41.903797] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:29.581 [2024-05-15 01:01:41.904015] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.581 01:01:41 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:29.839 malloc0 00:14:29.839 01:01:42 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:30.096 01:01:42 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Bu4qD7vXNp 00:14:30.354 [2024-05-15 01:01:42.649547] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:30.354 01:01:42 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Bu4qD7vXNp 00:14:30.354 EAL: No free 2048 kB hugepages reported on node 1 00:14:42.563 Initializing NVMe Controllers 00:14:42.563 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:42.563 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:42.563 Initialization complete. Launching workers. 
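Stripped of the xtrace noise, the target-side TLS provisioning that just ran is this RPC sequence (a condensed sketch; $rpc_py is the scripts/rpc.py path set at the top of tls.sh, and /tmp/tmp.Bu4qD7vXNp is the 0600-mode temp file holding the NVMeTLSkey-1:01:... interchange PSK generated above):

    $rpc_py sock_impl_set_options -i ssl --tls-version 13
    $rpc_py framework_start_init
    $rpc_py nvmf_create_transport -t tcp -o
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc_py bdev_malloc_create 32 4096 -b malloc0
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc_py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Bu4qD7vXNp

Only host1 is ever associated with a key, which is exactly what the negative cases further down exercise.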
00:14:42.563 ======================================================== 00:14:42.563 Latency(us) 00:14:42.563 Device Information : IOPS MiB/s Average min max 00:14:42.563 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7555.54 29.51 8473.54 1122.71 10043.57 00:14:42.563 ======================================================== 00:14:42.563 Total : 7555.54 29.51 8473.54 1122.71 10043.57 00:14:42.563 00:14:42.563 01:01:52 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Bu4qD7vXNp 00:14:42.563 01:01:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:42.563 01:01:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:42.563 01:01:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:42.563 01:01:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Bu4qD7vXNp' 00:14:42.563 01:01:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:42.563 01:01:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1254125 00:14:42.563 01:01:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:42.563 01:01:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:42.563 01:01:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1254125 /var/tmp/bdevperf.sock 00:14:42.563 01:01:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1254125 ']' 00:14:42.563 01:01:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:42.563 01:01:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:42.563 01:01:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:42.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:42.563 01:01:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:42.563 01:01:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:42.563 [2024-05-15 01:01:52.809707] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
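run_bdevperf, used for this positive case and for every failure case below, follows the same three-step client-side flow (a sketch with the paths used in this workspace; $rpc_py is assumed from the top of tls.sh, and the positive case passes cnode1/host1 with the key the target actually holds):

    bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    $bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &   # -z: start idle, wait for RPC config
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Bu4qD7vXNp
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s /var/tmp/bdevperf.sock perform_tests

When the TLS handshake cannot complete, the run dies at the attach step, which is what the NOT-wrapped cases below rely on.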
00:14:42.563 [2024-05-15 01:01:52.809790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1254125 ] 00:14:42.563 EAL: No free 2048 kB hugepages reported on node 1 00:14:42.563 [2024-05-15 01:01:52.876523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.563 [2024-05-15 01:01:52.983032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:42.563 01:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:42.563 01:01:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:42.563 01:01:53 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Bu4qD7vXNp 00:14:42.563 [2024-05-15 01:01:53.306306] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:42.563 [2024-05-15 01:01:53.306433] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:42.563 TLSTESTn1 00:14:42.563 01:01:53 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:42.563 Running I/O for 10 seconds... 00:14:52.578 00:14:52.578 Latency(us) 00:14:52.578 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:52.578 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:52.578 Verification LBA range: start 0x0 length 0x2000 00:14:52.578 TLSTESTn1 : 10.09 1344.22 5.25 0.00 0.00 94875.14 9466.31 124275.67 00:14:52.578 =================================================================================================================== 00:14:52.578 Total : 1344.22 5.25 0.00 0.00 94875.14 9466.31 124275.67 00:14:52.578 0 00:14:52.578 01:02:03 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:52.578 01:02:03 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1254125 00:14:52.578 01:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1254125 ']' 00:14:52.578 01:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1254125 00:14:52.578 01:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:52.578 01:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:52.578 01:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1254125 00:14:52.578 01:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:14:52.578 01:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:14:52.578 01:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1254125' 00:14:52.578 killing process with pid 1254125 00:14:52.578 01:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1254125 00:14:52.578 Received shutdown signal, test time was about 10.000000 seconds 00:14:52.578 00:14:52.578 Latency(us) 00:14:52.578 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:14:52.578 =================================================================================================================== 00:14:52.578 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:52.578 [2024-05-15 01:02:03.655337] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:52.578 01:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1254125 00:14:52.578 01:02:03 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PdrcTkQHfC 00:14:52.578 01:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:52.578 01:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PdrcTkQHfC 00:14:52.578 01:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:52.578 01:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:52.578 01:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:52.578 01:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:52.578 01:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PdrcTkQHfC 00:14:52.578 01:02:03 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:52.578 01:02:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:52.578 01:02:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:52.578 01:02:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.PdrcTkQHfC' 00:14:52.578 01:02:03 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:52.578 01:02:03 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1255448 00:14:52.578 01:02:03 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:52.578 01:02:03 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:52.578 01:02:03 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1255448 /var/tmp/bdevperf.sock 00:14:52.578 01:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1255448 ']' 00:14:52.578 01:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:52.578 01:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:52.578 01:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:52.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:52.578 01:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:52.578 01:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:52.578 [2024-05-15 01:02:03.953649] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:14:52.578 [2024-05-15 01:02:03.953733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1255448 ] 00:14:52.578 EAL: No free 2048 kB hugepages reported on node 1 00:14:52.578 [2024-05-15 01:02:04.024047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.578 [2024-05-15 01:02:04.132844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PdrcTkQHfC 00:14:52.578 [2024-05-15 01:02:04.452546] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:52.578 [2024-05-15 01:02:04.452659] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:52.578 [2024-05-15 01:02:04.461790] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:52.578 [2024-05-15 01:02:04.462561] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x699130 (107): Transport endpoint is not connected 00:14:52.578 [2024-05-15 01:02:04.463547] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x699130 (9): Bad file descriptor 00:14:52.578 [2024-05-15 01:02:04.464559] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:52.578 [2024-05-15 01:02:04.464580] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:52.578 [2024-05-15 01:02:04.464598] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
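This wrong-key case (and the three that follow) is wrapped in autotest_common.sh's NOT helper, whose trace is interleaved above. Functionally it just inverts the exit status of the command it runs, so the test passes only when the attach fails; a simplified sketch of the idea (the real helper also validates its argument and special-cases signal exits):

    NOT() {
        local es=0
        "$@" || es=$?
        # succeed only if the wrapped command failed
        ((es != 0))
    }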
00:14:52.578 request: 00:14:52.578 { 00:14:52.578 "name": "TLSTEST", 00:14:52.578 "trtype": "tcp", 00:14:52.578 "traddr": "10.0.0.2", 00:14:52.578 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:52.578 "adrfam": "ipv4", 00:14:52.578 "trsvcid": "4420", 00:14:52.578 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:52.578 "psk": "/tmp/tmp.PdrcTkQHfC", 00:14:52.578 "method": "bdev_nvme_attach_controller", 00:14:52.578 "req_id": 1 00:14:52.578 } 00:14:52.578 Got JSON-RPC error response 00:14:52.578 response: 00:14:52.578 { 00:14:52.578 "code": -32602, 00:14:52.578 "message": "Invalid parameters" 00:14:52.578 } 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1255448 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1255448 ']' 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1255448 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1255448 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1255448' 00:14:52.578 killing process with pid 1255448 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1255448 00:14:52.578 Received shutdown signal, test time was about 10.000000 seconds 00:14:52.578 00:14:52.578 Latency(us) 00:14:52.578 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:52.578 =================================================================================================================== 00:14:52.578 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:52.578 [2024-05-15 01:02:04.516316] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1255448 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Bu4qD7vXNp 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Bu4qD7vXNp 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 
-- # case "$(type -t "$arg")" in 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Bu4qD7vXNp 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Bu4qD7vXNp' 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1255468 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1255468 /var/tmp/bdevperf.sock 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1255468 ']' 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:52.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:52.578 01:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:52.578 [2024-05-15 01:02:04.819369] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:14:52.578 [2024-05-15 01:02:04.819447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1255468 ] 00:14:52.578 EAL: No free 2048 kB hugepages reported on node 1 00:14:52.578 [2024-05-15 01:02:04.889175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.836 [2024-05-15 01:02:04.996708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:52.836 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:52.836 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:52.836 01:02:05 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.Bu4qD7vXNp 00:14:53.093 [2024-05-15 01:02:05.337483] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:53.093 [2024-05-15 01:02:05.337607] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:53.093 [2024-05-15 01:02:05.344018] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:53.093 [2024-05-15 01:02:05.344049] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:53.093 [2024-05-15 01:02:05.344107] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:53.093 [2024-05-15 01:02:05.344608] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2300130 (107): Transport endpoint is not connected 00:14:53.093 [2024-05-15 01:02:05.345595] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2300130 (9): Bad file descriptor 00:14:53.093 [2024-05-15 01:02:05.346594] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:53.093 [2024-05-15 01:02:05.346612] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:53.093 [2024-05-15 01:02:05.346644] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
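The wrong-hostnqn case fails one step earlier than the wrong-key case: the target resolves the handshake PSK by the identity string shown in the error above (NVMe0R01 <hostnqn> <subnqn>), and only host1 was registered with nvmf_subsystem_add_host, so no key exists for host2 and the TLS session is rejected before any I/O. A quick way to see what the target will accept is to dump its subsystem configuration over the normal RPC socket (a sketch; the output is JSON listing the allowed hosts for cnode1):

    $rpc_py nvmf_get_subsystems | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .hosts'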
00:14:53.093 request: 00:14:53.093 { 00:14:53.093 "name": "TLSTEST", 00:14:53.093 "trtype": "tcp", 00:14:53.093 "traddr": "10.0.0.2", 00:14:53.093 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:53.093 "adrfam": "ipv4", 00:14:53.093 "trsvcid": "4420", 00:14:53.093 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:53.093 "psk": "/tmp/tmp.Bu4qD7vXNp", 00:14:53.093 "method": "bdev_nvme_attach_controller", 00:14:53.093 "req_id": 1 00:14:53.093 } 00:14:53.093 Got JSON-RPC error response 00:14:53.093 response: 00:14:53.093 { 00:14:53.093 "code": -32602, 00:14:53.093 "message": "Invalid parameters" 00:14:53.093 } 00:14:53.093 01:02:05 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1255468 00:14:53.093 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1255468 ']' 00:14:53.093 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1255468 00:14:53.093 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:53.093 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:53.093 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1255468 00:14:53.093 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:14:53.093 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:14:53.093 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1255468' 00:14:53.093 killing process with pid 1255468 00:14:53.093 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1255468 00:14:53.093 Received shutdown signal, test time was about 10.000000 seconds 00:14:53.093 00:14:53.093 Latency(us) 00:14:53.093 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.093 =================================================================================================================== 00:14:53.093 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:53.093 [2024-05-15 01:02:05.393319] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:53.093 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1255468 00:14:53.352 01:02:05 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:53.352 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:53.352 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:53.352 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:53.352 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:53.352 01:02:05 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Bu4qD7vXNp 00:14:53.352 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:53.352 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Bu4qD7vXNp 00:14:53.352 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:53.352 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:53.352 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:53.352 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 
-- # case "$(type -t "$arg")" in 00:14:53.352 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Bu4qD7vXNp 00:14:53.352 01:02:05 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:53.352 01:02:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:53.352 01:02:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:53.352 01:02:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Bu4qD7vXNp' 00:14:53.352 01:02:05 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:53.352 01:02:05 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1255608 00:14:53.352 01:02:05 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:53.352 01:02:05 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:53.352 01:02:05 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1255608 /var/tmp/bdevperf.sock 00:14:53.352 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1255608 ']' 00:14:53.352 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:53.352 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:53.352 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:53.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:53.352 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:53.352 01:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:53.352 [2024-05-15 01:02:05.703487] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:14:53.352 [2024-05-15 01:02:05.703571] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1255608 ] 00:14:53.352 EAL: No free 2048 kB hugepages reported on node 1 00:14:53.610 [2024-05-15 01:02:05.771692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.610 [2024-05-15 01:02:05.879862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:54.541 01:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:54.541 01:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:54.541 01:02:06 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Bu4qD7vXNp 00:14:54.798 [2024-05-15 01:02:06.950954] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:54.798 [2024-05-15 01:02:06.951094] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:54.798 [2024-05-15 01:02:06.958139] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:54.798 [2024-05-15 01:02:06.958169] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:54.798 [2024-05-15 01:02:06.958223] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:54.798 [2024-05-15 01:02:06.959062] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1d130 (107): Transport endpoint is not connected 00:14:54.798 [2024-05-15 01:02:06.960050] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1d130 (9): Bad file descriptor 00:14:54.798 [2024-05-15 01:02:06.961048] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:14:54.798 [2024-05-15 01:02:06.961068] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:54.798 [2024-05-15 01:02:06.961086] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
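The wrong-subnqn case is the mirror image: cnode2 was never created on the target, so the PSK lookup for identity NVMe0R01 host1 cnode2 fails in the same way. For contrast, the attach that did succeed earlier targeted the provisioned cnode1/host1 pair with the key the target holds (repeated here for reference):

    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Bu4qD7vXNp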
00:14:54.798 request: 00:14:54.798 { 00:14:54.798 "name": "TLSTEST", 00:14:54.798 "trtype": "tcp", 00:14:54.798 "traddr": "10.0.0.2", 00:14:54.798 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:54.798 "adrfam": "ipv4", 00:14:54.798 "trsvcid": "4420", 00:14:54.798 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:54.798 "psk": "/tmp/tmp.Bu4qD7vXNp", 00:14:54.798 "method": "bdev_nvme_attach_controller", 00:14:54.798 "req_id": 1 00:14:54.798 } 00:14:54.798 Got JSON-RPC error response 00:14:54.798 response: 00:14:54.798 { 00:14:54.798 "code": -32602, 00:14:54.798 "message": "Invalid parameters" 00:14:54.798 } 00:14:54.798 01:02:06 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1255608 00:14:54.798 01:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1255608 ']' 00:14:54.798 01:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1255608 00:14:54.798 01:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:54.798 01:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:54.798 01:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1255608 00:14:54.798 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:14:54.798 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:14:54.798 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1255608' 00:14:54.798 killing process with pid 1255608 00:14:54.798 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1255608 00:14:54.798 Received shutdown signal, test time was about 10.000000 seconds 00:14:54.798 00:14:54.798 Latency(us) 00:14:54.798 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.798 =================================================================================================================== 00:14:54.798 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:54.798 [2024-05-15 01:02:07.012057] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:54.798 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1255608 00:14:55.056 01:02:07 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:55.056 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:55.056 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:55.056 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:55.056 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:55.056 01:02:07 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:55.056 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:55.056 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:55.056 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:55.056 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:55.056 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:55.056 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:14:55.056 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:55.056 01:02:07 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:55.056 01:02:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:55.056 01:02:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:55.056 01:02:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:55.056 01:02:07 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:55.056 01:02:07 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1255873 00:14:55.056 01:02:07 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:55.056 01:02:07 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:55.056 01:02:07 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1255873 /var/tmp/bdevperf.sock 00:14:55.056 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1255873 ']' 00:14:55.056 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:55.056 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:55.056 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:55.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:55.056 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:55.056 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:55.056 [2024-05-15 01:02:07.311319] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:14:55.056 [2024-05-15 01:02:07.311400] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1255873 ] 00:14:55.056 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.056 [2024-05-15 01:02:07.379485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.314 [2024-05-15 01:02:07.489328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:55.314 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:55.314 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:55.314 01:02:07 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:55.572 [2024-05-15 01:02:07.830526] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:55.572 [2024-05-15 01:02:07.832694] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x125cab0 (9): Bad file descriptor 00:14:55.572 [2024-05-15 01:02:07.833689] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:55.572 [2024-05-15 01:02:07.833708] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:55.572 [2024-05-15 01:02:07.833740] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:55.572 request: 00:14:55.572 { 00:14:55.572 "name": "TLSTEST", 00:14:55.572 "trtype": "tcp", 00:14:55.572 "traddr": "10.0.0.2", 00:14:55.572 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:55.572 "adrfam": "ipv4", 00:14:55.572 "trsvcid": "4420", 00:14:55.572 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.572 "method": "bdev_nvme_attach_controller", 00:14:55.572 "req_id": 1 00:14:55.572 } 00:14:55.572 Got JSON-RPC error response 00:14:55.572 response: 00:14:55.572 { 00:14:55.572 "code": -32602, 00:14:55.572 "message": "Invalid parameters" 00:14:55.572 } 00:14:55.572 01:02:07 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1255873 00:14:55.572 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1255873 ']' 00:14:55.572 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1255873 00:14:55.572 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:55.572 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:55.572 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1255873 00:14:55.572 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:14:55.572 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:14:55.572 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1255873' 00:14:55.572 killing process with pid 1255873 00:14:55.572 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1255873 00:14:55.572 Received shutdown signal, test time was about 10.000000 seconds 00:14:55.572 00:14:55.572 Latency(us) 00:14:55.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.572 =================================================================================================================== 00:14:55.572 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:55.572 01:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1255873 00:14:55.829 01:02:08 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:55.829 01:02:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:55.829 01:02:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:55.829 01:02:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:55.829 01:02:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:55.829 01:02:08 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1252223 00:14:55.829 01:02:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1252223 ']' 00:14:55.829 01:02:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1252223 00:14:55.829 01:02:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:55.829 01:02:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:55.829 01:02:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1252223 00:14:55.829 01:02:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:55.829 01:02:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:55.829 01:02:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1252223' 00:14:55.829 killing process with pid 1252223 00:14:55.829 01:02:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1252223 
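The attach attempts above each come back as a -32602 "Invalid parameters" reply on the bdevperf control socket. For reference, a hand-rolled sketch of the same exchange; the jsonrpc/id envelope and the unframed transport are assumptions based on how the bundled rpc.py drives the socket, and the reply is simply printed rather than asserted:

# Sketch only: replay the first request printed above against bdevperf's RPC socket.
python3 - <<'PY'
import json, socket

req = {
    "jsonrpc": "2.0",          # envelope fields assumed; the trace only prints params/method/req_id
    "id": 1,
    "method": "bdev_nvme_attach_controller",
    "params": {
        "name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode2",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "psk": "/tmp/tmp.Bu4qD7vXNp",
    },
}
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect("/var/tmp/bdevperf.sock")
    s.sendall(json.dumps(req).encode())
    print(s.recv(65536).decode())   # prints whatever error object the app returns for this case
PY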
00:14:55.829 [2024-05-15 01:02:08.173454] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:55.829 [2024-05-15 01:02:08.173510] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:55.829 01:02:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1252223 00:14:56.086 01:02:08 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:56.086 01:02:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:56.086 01:02:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:56.086 01:02:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:56.086 01:02:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:56.086 01:02:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:14:56.086 01:02:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:56.345 01:02:08 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:56.345 01:02:08 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:14:56.345 01:02:08 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.Dq0oGnZBHQ 00:14:56.345 01:02:08 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:56.345 01:02:08 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.Dq0oGnZBHQ 00:14:56.345 01:02:08 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:14:56.345 01:02:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:56.345 01:02:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:56.345 01:02:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:56.345 01:02:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1256019 00:14:56.345 01:02:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:56.345 01:02:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1256019 00:14:56.345 01:02:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1256019 ']' 00:14:56.345 01:02:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.345 01:02:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:56.345 01:02:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.345 01:02:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:56.345 01:02:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:56.345 [2024-05-15 01:02:08.570957] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
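The key used for the rest of the test is derived above by format_interchange_psk: the configured key and a hash identifier are wrapped into the NVMe TLS interchange form NVMeTLSkey-1:02:<base64>:, written to a mktemp file, and locked down to mode 0600. A minimal sketch of what that helper appears to compute, inferred from the output printed in the trace (the key is treated as literal ASCII bytes, and the last four encoded bytes are taken to be its little-endian CRC32):

# Sketch only; mirrors the nvmf/common.sh helper invoked above (including its "python -" heredoc).
format_interchange_psk_sketch() {
  local key=$1 digest=$2
  python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                    # configured key, used as literal ASCII bytes (assumption)
crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte integrity tail appended before encoding (assumption)
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02}:"
      f"{base64.b64encode(key + crc).decode()}:")
PY
}

format_interchange_psk_sketch 00112233445566778899aabbccddeeff0011223344556677 2
# -> NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:  (matches key_long above)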
00:14:56.345 [2024-05-15 01:02:08.571042] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.345 EAL: No free 2048 kB hugepages reported on node 1 00:14:56.345 [2024-05-15 01:02:08.646722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.602 [2024-05-15 01:02:08.762151] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.602 [2024-05-15 01:02:08.762225] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.602 [2024-05-15 01:02:08.762240] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.602 [2024-05-15 01:02:08.762253] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:56.602 [2024-05-15 01:02:08.762263] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:56.602 [2024-05-15 01:02:08.762299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.167 01:02:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:57.167 01:02:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:57.167 01:02:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:57.424 01:02:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:57.424 01:02:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:57.424 01:02:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.424 01:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.Dq0oGnZBHQ 00:14:57.424 01:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Dq0oGnZBHQ 00:14:57.424 01:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:57.682 [2024-05-15 01:02:09.822005] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:57.682 01:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:57.941 01:02:10 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:58.199 [2024-05-15 01:02:10.355353] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:58.199 [2024-05-15 01:02:10.355459] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:58.199 [2024-05-15 01:02:10.355676] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.199 01:02:10 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:58.457 malloc0 00:14:58.457 01:02:10 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
00:14:58.714 01:02:10 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Dq0oGnZBHQ 00:14:58.714 [2024-05-15 01:02:11.093585] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:58.973 01:02:11 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Dq0oGnZBHQ 00:14:58.973 01:02:11 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:58.973 01:02:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:58.973 01:02:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:58.973 01:02:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Dq0oGnZBHQ' 00:14:58.973 01:02:11 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:58.973 01:02:11 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1256316 00:14:58.973 01:02:11 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:58.973 01:02:11 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:58.973 01:02:11 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1256316 /var/tmp/bdevperf.sock 00:14:58.973 01:02:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1256316 ']' 00:14:58.973 01:02:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:58.973 01:02:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:58.973 01:02:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:58.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:58.973 01:02:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:58.973 01:02:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:58.973 [2024-05-15 01:02:11.157733] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
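Stripped of the xtrace noise, the TLS setup exercised by this test case is a short sequence of RPCs: create the TCP transport, create the subsystem, add a listener with -k (TLS), back it with a malloc namespace, and register the host together with its PSK file; the initiator then attaches with the same PSK path (the attach itself follows in the next trace lines). Condensed from the commands visible above, with the long rpc.py path shortened to $rpc for readability:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# target side
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS-enabled listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Dq0oGnZBHQ

# initiator side, against bdevperf's RPC socket
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Dq0oGnZBHQ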
00:14:58.973 [2024-05-15 01:02:11.157810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1256316 ] 00:14:58.973 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.973 [2024-05-15 01:02:11.226403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.973 [2024-05-15 01:02:11.332698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:59.231 01:02:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:59.231 01:02:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:59.231 01:02:11 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Dq0oGnZBHQ 00:14:59.490 [2024-05-15 01:02:11.673531] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:59.490 [2024-05-15 01:02:11.673653] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:59.490 TLSTESTn1 00:14:59.490 01:02:11 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:59.490 Running I/O for 10 seconds... 00:15:11.704 00:15:11.704 Latency(us) 00:15:11.704 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.704 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:11.704 Verification LBA range: start 0x0 length 0x2000 00:15:11.704 TLSTESTn1 : 10.08 1365.06 5.33 0.00 0.00 93450.44 9757.58 124275.67 00:15:11.704 =================================================================================================================== 00:15:11.704 Total : 1365.06 5.33 0.00 0.00 93450.44 9757.58 124275.67 00:15:11.704 0 00:15:11.704 01:02:21 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:11.704 01:02:21 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1256316 00:15:11.704 01:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1256316 ']' 00:15:11.704 01:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1256316 00:15:11.704 01:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:11.704 01:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:11.704 01:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1256316 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1256316' 00:15:11.704 killing process with pid 1256316 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1256316 00:15:11.704 Received shutdown signal, test time was about 10.000000 seconds 00:15:11.704 00:15:11.704 Latency(us) 00:15:11.704 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:15:11.704 =================================================================================================================== 00:15:11.704 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:11.704 [2024-05-15 01:02:22.008099] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1256316 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.Dq0oGnZBHQ 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Dq0oGnZBHQ 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Dq0oGnZBHQ 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Dq0oGnZBHQ 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Dq0oGnZBHQ' 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1257634 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1257634 /var/tmp/bdevperf.sock 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1257634 ']' 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:11.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:11.704 [2024-05-15 01:02:22.323187] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:15:11.704 [2024-05-15 01:02:22.323267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1257634 ] 00:15:11.704 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.704 [2024-05-15 01:02:22.389911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.704 [2024-05-15 01:02:22.495191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Dq0oGnZBHQ 00:15:11.704 [2024-05-15 01:02:22.827482] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:11.704 [2024-05-15 01:02:22.827550] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:15:11.704 [2024-05-15 01:02:22.827565] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.Dq0oGnZBHQ 00:15:11.704 request: 00:15:11.704 { 00:15:11.704 "name": "TLSTEST", 00:15:11.704 "trtype": "tcp", 00:15:11.704 "traddr": "10.0.0.2", 00:15:11.704 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:11.704 "adrfam": "ipv4", 00:15:11.704 "trsvcid": "4420", 00:15:11.704 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.704 "psk": "/tmp/tmp.Dq0oGnZBHQ", 00:15:11.704 "method": "bdev_nvme_attach_controller", 00:15:11.704 "req_id": 1 00:15:11.704 } 00:15:11.704 Got JSON-RPC error response 00:15:11.704 response: 00:15:11.704 { 00:15:11.704 "code": -1, 00:15:11.704 "message": "Operation not permitted" 00:15:11.704 } 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1257634 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1257634 ']' 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1257634 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1257634 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1257634' 00:15:11.704 killing process with pid 1257634 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1257634 00:15:11.704 Received shutdown signal, test time was about 10.000000 seconds 00:15:11.704 00:15:11.704 Latency(us) 00:15:11.704 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.704 =================================================================================================================== 00:15:11.704 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:11.704 01:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 
-- # wait 1257634 00:15:11.705 01:02:23 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:11.705 01:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:11.705 01:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:11.705 01:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:11.705 01:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:11.705 01:02:23 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1256019 00:15:11.705 01:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1256019 ']' 00:15:11.705 01:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1256019 00:15:11.705 01:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:11.705 01:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:11.705 01:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1256019 00:15:11.705 01:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:11.705 01:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:11.705 01:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1256019' 00:15:11.705 killing process with pid 1256019 00:15:11.705 01:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1256019 00:15:11.705 [2024-05-15 01:02:23.166762] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:11.705 [2024-05-15 01:02:23.166820] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:11.705 01:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1256019 00:15:11.705 01:02:23 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:15:11.705 01:02:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:11.705 01:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:11.705 01:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:11.705 01:02:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1257787 00:15:11.705 01:02:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:11.705 01:02:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1257787 00:15:11.705 01:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1257787 ']' 00:15:11.705 01:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.705 01:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:11.705 01:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
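The "Incorrect permissions for PSK file" failure above is the expected negative case: the key file was loosened to 0666 a few lines earlier, and the initiator refuses to load it (the target-side nvmf_subsystem_add_host further down rejects the same file for the same reason). A pre-flight check in the test's own shell idiom would look roughly like the sketch below; the exact mode bits SPDK rejects are an assumption here, since the trace only shows 0600 succeeding and 0666 failing:

# Sketch only: reject a PSK file that grants any group/other access.
psk_perm_ok() {
  local mode
  mode=$(stat -c '%a' "$1") || return 1
  (( (8#$mode & 8#077) == 0 ))   # true only when group/other bits are all clear (e.g. 600, 400)
}

psk_perm_ok /tmp/tmp.Dq0oGnZBHQ || echo 'refusing PSK file with group/other access' >&2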
00:15:11.705 01:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:11.705 01:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:11.705 [2024-05-15 01:02:23.504648] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:15:11.705 [2024-05-15 01:02:23.504739] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.705 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.705 [2024-05-15 01:02:23.584849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.705 [2024-05-15 01:02:23.698377] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:11.705 [2024-05-15 01:02:23.698444] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:11.705 [2024-05-15 01:02:23.698460] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:11.705 [2024-05-15 01:02:23.698474] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:11.705 [2024-05-15 01:02:23.698485] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:11.705 [2024-05-15 01:02:23.698522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.271 01:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:12.271 01:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:12.271 01:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:12.271 01:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:12.271 01:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:12.271 01:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.271 01:02:24 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.Dq0oGnZBHQ 00:15:12.271 01:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:12.271 01:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Dq0oGnZBHQ 00:15:12.271 01:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:15:12.271 01:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:12.271 01:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:15:12.271 01:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:12.271 01:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.Dq0oGnZBHQ 00:15:12.271 01:02:24 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Dq0oGnZBHQ 00:15:12.271 01:02:24 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:12.529 [2024-05-15 01:02:24.767489] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:12.529 01:02:24 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:12.819 01:02:25 nvmf_tcp.nvmf_tls 
-- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:13.082 [2024-05-15 01:02:25.240709] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:13.082 [2024-05-15 01:02:25.240806] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:13.082 [2024-05-15 01:02:25.241051] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:13.082 01:02:25 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:13.340 malloc0 00:15:13.340 01:02:25 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:13.598 01:02:25 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Dq0oGnZBHQ 00:15:13.856 [2024-05-15 01:02:26.067078] tcp.c:3572:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:15:13.856 [2024-05-15 01:02:26.067120] tcp.c:3658:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:15:13.856 [2024-05-15 01:02:26.067152] subsystem.c:1030:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:15:13.856 request: 00:15:13.856 { 00:15:13.856 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:13.856 "host": "nqn.2016-06.io.spdk:host1", 00:15:13.856 "psk": "/tmp/tmp.Dq0oGnZBHQ", 00:15:13.856 "method": "nvmf_subsystem_add_host", 00:15:13.856 "req_id": 1 00:15:13.856 } 00:15:13.856 Got JSON-RPC error response 00:15:13.856 response: 00:15:13.856 { 00:15:13.856 "code": -32603, 00:15:13.856 "message": "Internal error" 00:15:13.856 } 00:15:13.856 01:02:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:13.856 01:02:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:13.856 01:02:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:13.856 01:02:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:13.856 01:02:26 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1257787 00:15:13.856 01:02:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1257787 ']' 00:15:13.856 01:02:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1257787 00:15:13.856 01:02:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:13.856 01:02:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:13.856 01:02:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1257787 00:15:13.856 01:02:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:13.856 01:02:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:13.856 01:02:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1257787' 00:15:13.856 killing process with pid 1257787 00:15:13.856 01:02:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1257787 00:15:13.856 [2024-05-15 01:02:26.121190] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:13.856 01:02:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1257787 00:15:14.114 01:02:26 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.Dq0oGnZBHQ 00:15:14.114 01:02:26 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:15:14.114 01:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:14.114 01:02:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:14.115 01:02:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:14.115 01:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1258098 00:15:14.115 01:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:14.115 01:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1258098 00:15:14.115 01:02:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1258098 ']' 00:15:14.115 01:02:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.115 01:02:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:14.115 01:02:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.115 01:02:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:14.115 01:02:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:14.115 [2024-05-15 01:02:26.461851] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:15:14.115 [2024-05-15 01:02:26.461965] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:14.373 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.373 [2024-05-15 01:02:26.546867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.373 [2024-05-15 01:02:26.657142] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:14.373 [2024-05-15 01:02:26.657196] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:14.373 [2024-05-15 01:02:26.657224] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:14.373 [2024-05-15 01:02:26.657235] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:14.373 [2024-05-15 01:02:26.657244] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:14.373 [2024-05-15 01:02:26.657270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.307 01:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:15.307 01:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:15.307 01:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:15.307 01:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:15.307 01:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:15.307 01:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:15.307 01:02:27 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.Dq0oGnZBHQ 00:15:15.307 01:02:27 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Dq0oGnZBHQ 00:15:15.307 01:02:27 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:15.566 [2024-05-15 01:02:27.727162] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:15.566 01:02:27 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:15.823 01:02:27 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:16.082 [2024-05-15 01:02:28.220402] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:16.082 [2024-05-15 01:02:28.220528] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:16.082 [2024-05-15 01:02:28.220725] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:16.082 01:02:28 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:16.082 malloc0 00:15:16.340 01:02:28 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:16.340 01:02:28 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Dq0oGnZBHQ 00:15:16.598 [2024-05-15 01:02:28.946851] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:16.598 01:02:28 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1258502 00:15:16.598 01:02:28 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:16.598 01:02:28 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:16.598 01:02:28 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1258502 /var/tmp/bdevperf.sock 00:15:16.598 01:02:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1258502 ']' 00:15:16.598 01:02:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local 
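The target being brought up here is the one whose running configuration is captured next with save_config and then replayed into a brand-new nvmf_tgt via -c /dev/fd/62; the two large JSON dumps below are those snapshots (one taken from the nvmf target, one from the bdevperf app). A condensed sketch of that capture-and-replay step, using the paths from this run and leaving out the netns/-i/-e details the test adds:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nvmf_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt

tgtconf=$($rpc save_config)                 # JSON snapshot of the live target configuration
"$nvmf_tgt" -m 0x2 -c <(echo "$tgtconf") &  # start a fresh target from it; the test feeds it as /dev/fd/62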
rpc_addr=/var/tmp/bdevperf.sock 00:15:16.598 01:02:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:16.598 01:02:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:16.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:16.598 01:02:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:16.598 01:02:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:16.856 [2024-05-15 01:02:29.002425] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:15:16.856 [2024-05-15 01:02:29.002507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258502 ] 00:15:16.856 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.856 [2024-05-15 01:02:29.070055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.856 [2024-05-15 01:02:29.175627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:17.114 01:02:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:17.114 01:02:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:17.114 01:02:29 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Dq0oGnZBHQ 00:15:17.372 [2024-05-15 01:02:29.506550] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:17.372 [2024-05-15 01:02:29.506645] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:17.372 TLSTESTn1 00:15:17.372 01:02:29 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:15:17.631 01:02:29 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:15:17.631 "subsystems": [ 00:15:17.631 { 00:15:17.631 "subsystem": "keyring", 00:15:17.631 "config": [] 00:15:17.631 }, 00:15:17.631 { 00:15:17.631 "subsystem": "iobuf", 00:15:17.631 "config": [ 00:15:17.631 { 00:15:17.631 "method": "iobuf_set_options", 00:15:17.631 "params": { 00:15:17.631 "small_pool_count": 8192, 00:15:17.631 "large_pool_count": 1024, 00:15:17.631 "small_bufsize": 8192, 00:15:17.631 "large_bufsize": 135168 00:15:17.631 } 00:15:17.631 } 00:15:17.631 ] 00:15:17.631 }, 00:15:17.631 { 00:15:17.631 "subsystem": "sock", 00:15:17.631 "config": [ 00:15:17.631 { 00:15:17.631 "method": "sock_impl_set_options", 00:15:17.631 "params": { 00:15:17.631 "impl_name": "posix", 00:15:17.631 "recv_buf_size": 2097152, 00:15:17.631 "send_buf_size": 2097152, 00:15:17.631 "enable_recv_pipe": true, 00:15:17.631 "enable_quickack": false, 00:15:17.631 "enable_placement_id": 0, 00:15:17.631 "enable_zerocopy_send_server": true, 00:15:17.631 "enable_zerocopy_send_client": false, 00:15:17.631 "zerocopy_threshold": 0, 00:15:17.631 "tls_version": 0, 00:15:17.631 "enable_ktls": false 00:15:17.631 } 00:15:17.631 }, 00:15:17.631 { 00:15:17.631 "method": "sock_impl_set_options", 00:15:17.631 "params": { 00:15:17.631 
"impl_name": "ssl", 00:15:17.631 "recv_buf_size": 4096, 00:15:17.631 "send_buf_size": 4096, 00:15:17.631 "enable_recv_pipe": true, 00:15:17.631 "enable_quickack": false, 00:15:17.631 "enable_placement_id": 0, 00:15:17.631 "enable_zerocopy_send_server": true, 00:15:17.631 "enable_zerocopy_send_client": false, 00:15:17.631 "zerocopy_threshold": 0, 00:15:17.631 "tls_version": 0, 00:15:17.631 "enable_ktls": false 00:15:17.631 } 00:15:17.631 } 00:15:17.631 ] 00:15:17.631 }, 00:15:17.631 { 00:15:17.631 "subsystem": "vmd", 00:15:17.631 "config": [] 00:15:17.631 }, 00:15:17.631 { 00:15:17.631 "subsystem": "accel", 00:15:17.631 "config": [ 00:15:17.631 { 00:15:17.632 "method": "accel_set_options", 00:15:17.632 "params": { 00:15:17.632 "small_cache_size": 128, 00:15:17.632 "large_cache_size": 16, 00:15:17.632 "task_count": 2048, 00:15:17.632 "sequence_count": 2048, 00:15:17.632 "buf_count": 2048 00:15:17.632 } 00:15:17.632 } 00:15:17.632 ] 00:15:17.632 }, 00:15:17.632 { 00:15:17.632 "subsystem": "bdev", 00:15:17.632 "config": [ 00:15:17.632 { 00:15:17.632 "method": "bdev_set_options", 00:15:17.632 "params": { 00:15:17.632 "bdev_io_pool_size": 65535, 00:15:17.632 "bdev_io_cache_size": 256, 00:15:17.632 "bdev_auto_examine": true, 00:15:17.632 "iobuf_small_cache_size": 128, 00:15:17.632 "iobuf_large_cache_size": 16 00:15:17.632 } 00:15:17.632 }, 00:15:17.632 { 00:15:17.632 "method": "bdev_raid_set_options", 00:15:17.632 "params": { 00:15:17.632 "process_window_size_kb": 1024 00:15:17.632 } 00:15:17.632 }, 00:15:17.632 { 00:15:17.632 "method": "bdev_iscsi_set_options", 00:15:17.632 "params": { 00:15:17.632 "timeout_sec": 30 00:15:17.632 } 00:15:17.632 }, 00:15:17.632 { 00:15:17.632 "method": "bdev_nvme_set_options", 00:15:17.632 "params": { 00:15:17.632 "action_on_timeout": "none", 00:15:17.632 "timeout_us": 0, 00:15:17.632 "timeout_admin_us": 0, 00:15:17.632 "keep_alive_timeout_ms": 10000, 00:15:17.632 "arbitration_burst": 0, 00:15:17.632 "low_priority_weight": 0, 00:15:17.632 "medium_priority_weight": 0, 00:15:17.632 "high_priority_weight": 0, 00:15:17.632 "nvme_adminq_poll_period_us": 10000, 00:15:17.632 "nvme_ioq_poll_period_us": 0, 00:15:17.632 "io_queue_requests": 0, 00:15:17.632 "delay_cmd_submit": true, 00:15:17.632 "transport_retry_count": 4, 00:15:17.632 "bdev_retry_count": 3, 00:15:17.632 "transport_ack_timeout": 0, 00:15:17.632 "ctrlr_loss_timeout_sec": 0, 00:15:17.632 "reconnect_delay_sec": 0, 00:15:17.632 "fast_io_fail_timeout_sec": 0, 00:15:17.632 "disable_auto_failback": false, 00:15:17.632 "generate_uuids": false, 00:15:17.632 "transport_tos": 0, 00:15:17.632 "nvme_error_stat": false, 00:15:17.632 "rdma_srq_size": 0, 00:15:17.632 "io_path_stat": false, 00:15:17.632 "allow_accel_sequence": false, 00:15:17.632 "rdma_max_cq_size": 0, 00:15:17.632 "rdma_cm_event_timeout_ms": 0, 00:15:17.632 "dhchap_digests": [ 00:15:17.632 "sha256", 00:15:17.632 "sha384", 00:15:17.632 "sha512" 00:15:17.632 ], 00:15:17.632 "dhchap_dhgroups": [ 00:15:17.632 "null", 00:15:17.632 "ffdhe2048", 00:15:17.632 "ffdhe3072", 00:15:17.632 "ffdhe4096", 00:15:17.632 "ffdhe6144", 00:15:17.632 "ffdhe8192" 00:15:17.632 ] 00:15:17.632 } 00:15:17.632 }, 00:15:17.632 { 00:15:17.632 "method": "bdev_nvme_set_hotplug", 00:15:17.632 "params": { 00:15:17.632 "period_us": 100000, 00:15:17.632 "enable": false 00:15:17.632 } 00:15:17.632 }, 00:15:17.632 { 00:15:17.632 "method": "bdev_malloc_create", 00:15:17.632 "params": { 00:15:17.632 "name": "malloc0", 00:15:17.632 "num_blocks": 8192, 00:15:17.632 "block_size": 4096, 00:15:17.632 
"physical_block_size": 4096, 00:15:17.632 "uuid": "611bd1ce-8ec5-4a77-9f18-87b58f57234c", 00:15:17.632 "optimal_io_boundary": 0 00:15:17.632 } 00:15:17.632 }, 00:15:17.632 { 00:15:17.632 "method": "bdev_wait_for_examine" 00:15:17.632 } 00:15:17.632 ] 00:15:17.632 }, 00:15:17.632 { 00:15:17.632 "subsystem": "nbd", 00:15:17.632 "config": [] 00:15:17.632 }, 00:15:17.632 { 00:15:17.632 "subsystem": "scheduler", 00:15:17.632 "config": [ 00:15:17.632 { 00:15:17.632 "method": "framework_set_scheduler", 00:15:17.632 "params": { 00:15:17.632 "name": "static" 00:15:17.632 } 00:15:17.632 } 00:15:17.632 ] 00:15:17.632 }, 00:15:17.632 { 00:15:17.632 "subsystem": "nvmf", 00:15:17.632 "config": [ 00:15:17.632 { 00:15:17.632 "method": "nvmf_set_config", 00:15:17.632 "params": { 00:15:17.632 "discovery_filter": "match_any", 00:15:17.632 "admin_cmd_passthru": { 00:15:17.632 "identify_ctrlr": false 00:15:17.632 } 00:15:17.632 } 00:15:17.632 }, 00:15:17.632 { 00:15:17.632 "method": "nvmf_set_max_subsystems", 00:15:17.632 "params": { 00:15:17.632 "max_subsystems": 1024 00:15:17.632 } 00:15:17.632 }, 00:15:17.632 { 00:15:17.632 "method": "nvmf_set_crdt", 00:15:17.632 "params": { 00:15:17.632 "crdt1": 0, 00:15:17.632 "crdt2": 0, 00:15:17.632 "crdt3": 0 00:15:17.632 } 00:15:17.632 }, 00:15:17.632 { 00:15:17.632 "method": "nvmf_create_transport", 00:15:17.632 "params": { 00:15:17.632 "trtype": "TCP", 00:15:17.632 "max_queue_depth": 128, 00:15:17.632 "max_io_qpairs_per_ctrlr": 127, 00:15:17.632 "in_capsule_data_size": 4096, 00:15:17.632 "max_io_size": 131072, 00:15:17.632 "io_unit_size": 131072, 00:15:17.632 "max_aq_depth": 128, 00:15:17.632 "num_shared_buffers": 511, 00:15:17.632 "buf_cache_size": 4294967295, 00:15:17.632 "dif_insert_or_strip": false, 00:15:17.632 "zcopy": false, 00:15:17.632 "c2h_success": false, 00:15:17.632 "sock_priority": 0, 00:15:17.632 "abort_timeout_sec": 1, 00:15:17.632 "ack_timeout": 0, 00:15:17.632 "data_wr_pool_size": 0 00:15:17.632 } 00:15:17.632 }, 00:15:17.632 { 00:15:17.632 "method": "nvmf_create_subsystem", 00:15:17.632 "params": { 00:15:17.632 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:17.632 "allow_any_host": false, 00:15:17.632 "serial_number": "SPDK00000000000001", 00:15:17.632 "model_number": "SPDK bdev Controller", 00:15:17.632 "max_namespaces": 10, 00:15:17.632 "min_cntlid": 1, 00:15:17.632 "max_cntlid": 65519, 00:15:17.632 "ana_reporting": false 00:15:17.632 } 00:15:17.632 }, 00:15:17.632 { 00:15:17.632 "method": "nvmf_subsystem_add_host", 00:15:17.632 "params": { 00:15:17.632 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:17.632 "host": "nqn.2016-06.io.spdk:host1", 00:15:17.632 "psk": "/tmp/tmp.Dq0oGnZBHQ" 00:15:17.632 } 00:15:17.632 }, 00:15:17.632 { 00:15:17.632 "method": "nvmf_subsystem_add_ns", 00:15:17.632 "params": { 00:15:17.632 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:17.632 "namespace": { 00:15:17.632 "nsid": 1, 00:15:17.632 "bdev_name": "malloc0", 00:15:17.632 "nguid": "611BD1CE8EC54A779F1887B58F57234C", 00:15:17.632 "uuid": "611bd1ce-8ec5-4a77-9f18-87b58f57234c", 00:15:17.632 "no_auto_visible": false 00:15:17.632 } 00:15:17.632 } 00:15:17.632 }, 00:15:17.632 { 00:15:17.632 "method": "nvmf_subsystem_add_listener", 00:15:17.632 "params": { 00:15:17.632 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:17.632 "listen_address": { 00:15:17.632 "trtype": "TCP", 00:15:17.632 "adrfam": "IPv4", 00:15:17.632 "traddr": "10.0.0.2", 00:15:17.632 "trsvcid": "4420" 00:15:17.632 }, 00:15:17.632 "secure_channel": true 00:15:17.632 } 00:15:17.632 } 00:15:17.632 ] 00:15:17.632 } 
00:15:17.632 ] 00:15:17.632 }' 00:15:17.632 01:02:29 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:17.891 01:02:30 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:15:17.891 "subsystems": [ 00:15:17.891 { 00:15:17.891 "subsystem": "keyring", 00:15:17.891 "config": [] 00:15:17.891 }, 00:15:17.891 { 00:15:17.891 "subsystem": "iobuf", 00:15:17.891 "config": [ 00:15:17.891 { 00:15:17.891 "method": "iobuf_set_options", 00:15:17.891 "params": { 00:15:17.891 "small_pool_count": 8192, 00:15:17.891 "large_pool_count": 1024, 00:15:17.891 "small_bufsize": 8192, 00:15:17.891 "large_bufsize": 135168 00:15:17.891 } 00:15:17.891 } 00:15:17.891 ] 00:15:17.891 }, 00:15:17.891 { 00:15:17.891 "subsystem": "sock", 00:15:17.891 "config": [ 00:15:17.891 { 00:15:17.891 "method": "sock_impl_set_options", 00:15:17.891 "params": { 00:15:17.891 "impl_name": "posix", 00:15:17.891 "recv_buf_size": 2097152, 00:15:17.891 "send_buf_size": 2097152, 00:15:17.891 "enable_recv_pipe": true, 00:15:17.891 "enable_quickack": false, 00:15:17.891 "enable_placement_id": 0, 00:15:17.891 "enable_zerocopy_send_server": true, 00:15:17.891 "enable_zerocopy_send_client": false, 00:15:17.891 "zerocopy_threshold": 0, 00:15:17.891 "tls_version": 0, 00:15:17.891 "enable_ktls": false 00:15:17.891 } 00:15:17.891 }, 00:15:17.891 { 00:15:17.891 "method": "sock_impl_set_options", 00:15:17.891 "params": { 00:15:17.891 "impl_name": "ssl", 00:15:17.891 "recv_buf_size": 4096, 00:15:17.891 "send_buf_size": 4096, 00:15:17.891 "enable_recv_pipe": true, 00:15:17.891 "enable_quickack": false, 00:15:17.891 "enable_placement_id": 0, 00:15:17.891 "enable_zerocopy_send_server": true, 00:15:17.891 "enable_zerocopy_send_client": false, 00:15:17.891 "zerocopy_threshold": 0, 00:15:17.891 "tls_version": 0, 00:15:17.891 "enable_ktls": false 00:15:17.891 } 00:15:17.891 } 00:15:17.891 ] 00:15:17.891 }, 00:15:17.891 { 00:15:17.891 "subsystem": "vmd", 00:15:17.891 "config": [] 00:15:17.891 }, 00:15:17.891 { 00:15:17.891 "subsystem": "accel", 00:15:17.891 "config": [ 00:15:17.891 { 00:15:17.891 "method": "accel_set_options", 00:15:17.891 "params": { 00:15:17.891 "small_cache_size": 128, 00:15:17.891 "large_cache_size": 16, 00:15:17.891 "task_count": 2048, 00:15:17.891 "sequence_count": 2048, 00:15:17.891 "buf_count": 2048 00:15:17.891 } 00:15:17.891 } 00:15:17.891 ] 00:15:17.891 }, 00:15:17.891 { 00:15:17.891 "subsystem": "bdev", 00:15:17.891 "config": [ 00:15:17.891 { 00:15:17.891 "method": "bdev_set_options", 00:15:17.891 "params": { 00:15:17.891 "bdev_io_pool_size": 65535, 00:15:17.891 "bdev_io_cache_size": 256, 00:15:17.891 "bdev_auto_examine": true, 00:15:17.891 "iobuf_small_cache_size": 128, 00:15:17.891 "iobuf_large_cache_size": 16 00:15:17.891 } 00:15:17.891 }, 00:15:17.891 { 00:15:17.891 "method": "bdev_raid_set_options", 00:15:17.891 "params": { 00:15:17.891 "process_window_size_kb": 1024 00:15:17.891 } 00:15:17.891 }, 00:15:17.891 { 00:15:17.891 "method": "bdev_iscsi_set_options", 00:15:17.891 "params": { 00:15:17.891 "timeout_sec": 30 00:15:17.891 } 00:15:17.891 }, 00:15:17.891 { 00:15:17.891 "method": "bdev_nvme_set_options", 00:15:17.891 "params": { 00:15:17.891 "action_on_timeout": "none", 00:15:17.891 "timeout_us": 0, 00:15:17.891 "timeout_admin_us": 0, 00:15:17.891 "keep_alive_timeout_ms": 10000, 00:15:17.891 "arbitration_burst": 0, 00:15:17.891 "low_priority_weight": 0, 00:15:17.891 "medium_priority_weight": 0, 00:15:17.891 
"high_priority_weight": 0, 00:15:17.891 "nvme_adminq_poll_period_us": 10000, 00:15:17.891 "nvme_ioq_poll_period_us": 0, 00:15:17.891 "io_queue_requests": 512, 00:15:17.892 "delay_cmd_submit": true, 00:15:17.892 "transport_retry_count": 4, 00:15:17.892 "bdev_retry_count": 3, 00:15:17.892 "transport_ack_timeout": 0, 00:15:17.892 "ctrlr_loss_timeout_sec": 0, 00:15:17.892 "reconnect_delay_sec": 0, 00:15:17.892 "fast_io_fail_timeout_sec": 0, 00:15:17.892 "disable_auto_failback": false, 00:15:17.892 "generate_uuids": false, 00:15:17.892 "transport_tos": 0, 00:15:17.892 "nvme_error_stat": false, 00:15:17.892 "rdma_srq_size": 0, 00:15:17.892 "io_path_stat": false, 00:15:17.892 "allow_accel_sequence": false, 00:15:17.892 "rdma_max_cq_size": 0, 00:15:17.892 "rdma_cm_event_timeout_ms": 0, 00:15:17.892 "dhchap_digests": [ 00:15:17.892 "sha256", 00:15:17.892 "sha384", 00:15:17.892 "sha512" 00:15:17.892 ], 00:15:17.892 "dhchap_dhgroups": [ 00:15:17.892 "null", 00:15:17.892 "ffdhe2048", 00:15:17.892 "ffdhe3072", 00:15:17.892 "ffdhe4096", 00:15:17.892 "ffdhe6144", 00:15:17.892 "ffdhe8192" 00:15:17.892 ] 00:15:17.892 } 00:15:17.892 }, 00:15:17.892 { 00:15:17.892 "method": "bdev_nvme_attach_controller", 00:15:17.892 "params": { 00:15:17.892 "name": "TLSTEST", 00:15:17.892 "trtype": "TCP", 00:15:17.892 "adrfam": "IPv4", 00:15:17.892 "traddr": "10.0.0.2", 00:15:17.892 "trsvcid": "4420", 00:15:17.892 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:17.892 "prchk_reftag": false, 00:15:17.892 "prchk_guard": false, 00:15:17.892 "ctrlr_loss_timeout_sec": 0, 00:15:17.892 "reconnect_delay_sec": 0, 00:15:17.892 "fast_io_fail_timeout_sec": 0, 00:15:17.892 "psk": "/tmp/tmp.Dq0oGnZBHQ", 00:15:17.892 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:17.892 "hdgst": false, 00:15:17.892 "ddgst": false 00:15:17.892 } 00:15:17.892 }, 00:15:17.892 { 00:15:17.892 "method": "bdev_nvme_set_hotplug", 00:15:17.892 "params": { 00:15:17.892 "period_us": 100000, 00:15:17.892 "enable": false 00:15:17.892 } 00:15:17.892 }, 00:15:17.892 { 00:15:17.892 "method": "bdev_wait_for_examine" 00:15:17.892 } 00:15:17.892 ] 00:15:17.892 }, 00:15:17.892 { 00:15:17.892 "subsystem": "nbd", 00:15:17.892 "config": [] 00:15:17.892 } 00:15:17.892 ] 00:15:17.892 }' 00:15:17.892 01:02:30 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1258502 00:15:17.892 01:02:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1258502 ']' 00:15:17.892 01:02:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1258502 00:15:17.892 01:02:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:17.892 01:02:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:17.892 01:02:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1258502 00:15:17.892 01:02:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:15:17.892 01:02:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:15:17.892 01:02:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1258502' 00:15:17.892 killing process with pid 1258502 00:15:17.892 01:02:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1258502 00:15:17.892 Received shutdown signal, test time was about 10.000000 seconds 00:15:17.892 00:15:17.892 Latency(us) 00:15:17.892 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:17.892 
=================================================================================================================== 00:15:17.892 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:17.892 [2024-05-15 01:02:30.258254] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:17.892 01:02:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1258502 00:15:18.149 01:02:30 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1258098 00:15:18.149 01:02:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1258098 ']' 00:15:18.149 01:02:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1258098 00:15:18.149 01:02:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:18.149 01:02:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:18.149 01:02:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1258098 00:15:18.407 01:02:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:18.407 01:02:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:18.407 01:02:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1258098' 00:15:18.407 killing process with pid 1258098 00:15:18.407 01:02:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1258098 00:15:18.407 [2024-05-15 01:02:30.551392] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:18.407 [2024-05-15 01:02:30.551447] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:18.407 01:02:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1258098 00:15:18.666 01:02:30 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:18.666 01:02:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:18.666 01:02:30 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:15:18.666 "subsystems": [ 00:15:18.666 { 00:15:18.666 "subsystem": "keyring", 00:15:18.666 "config": [] 00:15:18.666 }, 00:15:18.666 { 00:15:18.666 "subsystem": "iobuf", 00:15:18.666 "config": [ 00:15:18.666 { 00:15:18.666 "method": "iobuf_set_options", 00:15:18.666 "params": { 00:15:18.666 "small_pool_count": 8192, 00:15:18.666 "large_pool_count": 1024, 00:15:18.666 "small_bufsize": 8192, 00:15:18.666 "large_bufsize": 135168 00:15:18.666 } 00:15:18.666 } 00:15:18.666 ] 00:15:18.666 }, 00:15:18.666 { 00:15:18.666 "subsystem": "sock", 00:15:18.666 "config": [ 00:15:18.666 { 00:15:18.666 "method": "sock_impl_set_options", 00:15:18.666 "params": { 00:15:18.666 "impl_name": "posix", 00:15:18.666 "recv_buf_size": 2097152, 00:15:18.666 "send_buf_size": 2097152, 00:15:18.666 "enable_recv_pipe": true, 00:15:18.666 "enable_quickack": false, 00:15:18.666 "enable_placement_id": 0, 00:15:18.666 "enable_zerocopy_send_server": true, 00:15:18.666 "enable_zerocopy_send_client": false, 00:15:18.666 "zerocopy_threshold": 0, 00:15:18.666 "tls_version": 0, 00:15:18.666 "enable_ktls": false 00:15:18.666 } 00:15:18.666 }, 00:15:18.666 { 00:15:18.666 "method": "sock_impl_set_options", 00:15:18.666 "params": { 00:15:18.666 "impl_name": "ssl", 00:15:18.666 "recv_buf_size": 4096, 00:15:18.666 
"send_buf_size": 4096, 00:15:18.666 "enable_recv_pipe": true, 00:15:18.666 "enable_quickack": false, 00:15:18.666 "enable_placement_id": 0, 00:15:18.666 "enable_zerocopy_send_server": true, 00:15:18.666 "enable_zerocopy_send_client": false, 00:15:18.666 "zerocopy_threshold": 0, 00:15:18.666 "tls_version": 0, 00:15:18.666 "enable_ktls": false 00:15:18.666 } 00:15:18.666 } 00:15:18.666 ] 00:15:18.666 }, 00:15:18.666 { 00:15:18.666 "subsystem": "vmd", 00:15:18.666 "config": [] 00:15:18.666 }, 00:15:18.666 { 00:15:18.666 "subsystem": "accel", 00:15:18.666 "config": [ 00:15:18.666 { 00:15:18.666 "method": "accel_set_options", 00:15:18.666 "params": { 00:15:18.666 "small_cache_size": 128, 00:15:18.666 "large_cache_size": 16, 00:15:18.666 "task_count": 2048, 00:15:18.666 "sequence_count": 2048, 00:15:18.666 "buf_count": 2048 00:15:18.666 } 00:15:18.666 } 00:15:18.666 ] 00:15:18.666 }, 00:15:18.666 { 00:15:18.666 "subsystem": "bdev", 00:15:18.666 "config": [ 00:15:18.666 { 00:15:18.666 "method": "bdev_set_options", 00:15:18.666 "params": { 00:15:18.666 "bdev_io_pool_size": 65535, 00:15:18.666 "bdev_io_cache_size": 256, 00:15:18.666 "bdev_auto_examine": true, 00:15:18.666 "iobuf_small_cache_size": 128, 00:15:18.666 "iobuf_large_cache_size": 16 00:15:18.666 } 00:15:18.666 }, 00:15:18.666 { 00:15:18.666 "method": "bdev_raid_set_options", 00:15:18.666 "params": { 00:15:18.666 "process_window_size_kb": 1024 00:15:18.666 } 00:15:18.666 }, 00:15:18.666 { 00:15:18.666 "method": "bdev_iscsi_set_options", 00:15:18.666 "params": { 00:15:18.666 "timeout_sec": 30 00:15:18.666 } 00:15:18.666 }, 00:15:18.666 { 00:15:18.666 "method": "bdev_nvme_set_options", 00:15:18.666 "params": { 00:15:18.666 "action_on_timeout": "none", 00:15:18.666 "timeout_us": 0, 00:15:18.666 "timeout_admin_us": 0, 00:15:18.666 "keep_alive_timeout_ms": 10000, 00:15:18.666 "arbitration_burst": 0, 00:15:18.666 "low_priority_weight": 0, 00:15:18.666 "medium_priority_weight": 0, 00:15:18.666 "high_priority_weight": 0, 00:15:18.666 "nvme_adminq_poll_period_us": 10000, 00:15:18.666 "nvme_ioq_poll_period_us": 0, 00:15:18.666 "io_queue_requests": 0, 00:15:18.666 "delay_cmd_submit": true, 00:15:18.666 "transport_retry_count": 4, 00:15:18.666 "bdev_retry_count": 3, 00:15:18.666 "transport_ack_timeout": 0, 00:15:18.666 "ctrlr_loss_timeout_sec": 0, 00:15:18.666 "reconnect_delay_sec": 0, 00:15:18.666 "fast_io_fail_timeout_sec": 0, 00:15:18.666 "disable_auto_failback": false, 00:15:18.666 "generate_uuids": false, 00:15:18.666 "transport_tos": 0, 00:15:18.666 "nvme_error_stat": false, 00:15:18.666 "rdma_srq_size": 0, 00:15:18.666 "io_path_stat": false, 00:15:18.666 "allow_accel_sequence": false, 00:15:18.666 "rdma_max_cq_size": 0, 00:15:18.666 "rdma_cm_event_timeout_ms": 0, 00:15:18.666 "dhchap_digests": [ 00:15:18.666 "sha256", 00:15:18.666 "sha384", 00:15:18.666 "sha512" 00:15:18.666 ], 00:15:18.666 "dhchap_dhgroups": [ 00:15:18.666 "null", 00:15:18.666 "ffdhe2048", 00:15:18.666 "ffdhe3072", 00:15:18.666 "ffdhe4096", 00:15:18.666 "ffdhe6144", 00:15:18.666 "ffdhe8192" 00:15:18.666 ] 00:15:18.666 } 00:15:18.666 }, 00:15:18.666 { 00:15:18.666 "method": "bdev_nvme_set_hotplug", 00:15:18.666 "params": { 00:15:18.666 "period_us": 100000, 00:15:18.666 "enable": false 00:15:18.666 } 00:15:18.666 }, 00:15:18.666 { 00:15:18.666 "method": "bdev_malloc_create", 00:15:18.666 "params": { 00:15:18.666 "name": "malloc0", 00:15:18.666 "num_blocks": 8192, 00:15:18.666 "block_size": 4096, 00:15:18.666 "physical_block_size": 4096, 00:15:18.666 "uuid": 
"611bd1ce-8ec5-4a77-9f18-87b58f57234c", 00:15:18.666 "optimal_io_boundary": 0 00:15:18.666 } 00:15:18.666 }, 00:15:18.666 { 00:15:18.666 "method": "bdev_wait_for_examine" 00:15:18.666 } 00:15:18.666 ] 00:15:18.666 }, 00:15:18.666 { 00:15:18.666 "subsystem": "nbd", 00:15:18.666 "config": [] 00:15:18.666 }, 00:15:18.666 { 00:15:18.666 "subsystem": "scheduler", 00:15:18.666 "config": [ 00:15:18.666 { 00:15:18.666 "method": "framework_set_scheduler", 00:15:18.666 "params": { 00:15:18.666 "name": "static" 00:15:18.666 } 00:15:18.666 } 00:15:18.666 ] 00:15:18.666 }, 00:15:18.666 { 00:15:18.666 "subsystem": "nvmf", 00:15:18.666 "config": [ 00:15:18.666 { 00:15:18.666 "method": "nvmf_set_config", 00:15:18.666 "params": { 00:15:18.666 "discovery_filter": "match_any", 00:15:18.666 "admin_cmd_passthru": { 00:15:18.666 "identify_ctrlr": false 00:15:18.666 } 00:15:18.666 } 00:15:18.666 }, 00:15:18.666 { 00:15:18.666 "method": "nvmf_set_max_subsystems", 00:15:18.666 "params": { 00:15:18.666 "max_subsystems": 1024 00:15:18.666 } 00:15:18.666 }, 00:15:18.666 { 00:15:18.666 "method": "nvmf_set_crdt", 00:15:18.666 "params": { 00:15:18.666 "crdt1": 0, 00:15:18.666 "crdt2": 0, 00:15:18.666 "crdt3": 0 00:15:18.666 } 00:15:18.666 }, 00:15:18.666 { 00:15:18.666 "method": "nvmf_create_transport", 00:15:18.666 "params": { 00:15:18.666 "trtype": "TCP", 00:15:18.666 "max_queue_depth": 128, 00:15:18.666 "max_io_qpairs_per_ctrlr": 127, 00:15:18.666 "in_capsule_data_size": 4096, 00:15:18.666 "max_io_size": 131072, 00:15:18.666 "io_unit_size": 131072, 00:15:18.666 "max_aq_depth": 128, 00:15:18.666 "num_shared_buffers": 511, 00:15:18.666 "buf_cache_size": 4294967295, 00:15:18.666 "dif_insert_or_strip": false, 00:15:18.666 "zcopy": false, 00:15:18.666 "c2h_success": false, 00:15:18.667 "sock_priority": 0, 00:15:18.667 "abort_timeout_sec": 1, 00:15:18.667 "ack_timeout": 0, 00:15:18.667 "data_wr_pool_size": 0 00:15:18.667 } 00:15:18.667 }, 00:15:18.667 { 00:15:18.667 "method": "nvmf_create_subsystem", 00:15:18.667 "params": { 00:15:18.667 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:18.667 "allow_any_host": false, 00:15:18.667 "serial_number": "SPDK00000000000001", 00:15:18.667 "model_number": "SPDK bdev Controller", 00:15:18.667 "max_namespaces": 10, 00:15:18.667 "min_cntlid": 1, 00:15:18.667 "max_cntlid": 65519, 00:15:18.667 "ana_reporting": false 00:15:18.667 } 00:15:18.667 }, 00:15:18.667 { 00:15:18.667 "method": "nvmf_subsystem_add_host", 00:15:18.667 "params": { 00:15:18.667 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:18.667 "host": "nqn.2016-06.io.spdk:host1", 00:15:18.667 "psk": "/tmp/tmp.Dq0oGnZBHQ" 00:15:18.667 } 00:15:18.667 }, 00:15:18.667 { 00:15:18.667 "method": "nvmf_subsystem_add_ns", 00:15:18.667 "params": { 00:15:18.667 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:18.667 "namespace": { 00:15:18.667 "nsid": 1, 00:15:18.667 "bdev_name": "malloc0", 00:15:18.667 "nguid": "611BD1CE8EC54A779F1887B58F57234C", 00:15:18.667 "uuid": "611bd1ce-8ec5-4a77-9f18-87b58f57234c", 00:15:18.667 "no_auto_visible": false 00:15:18.667 } 00:15:18.667 } 00:15:18.667 }, 00:15:18.667 { 00:15:18.667 "method": "nvmf_subsystem_add_listener", 00:15:18.667 "params": { 00:15:18.667 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:18.667 "listen_address": { 00:15:18.667 "trtype": "TCP", 00:15:18.667 "adrfam": "IPv4", 00:15:18.667 "traddr": "10.0.0.2", 00:15:18.667 "trsvcid": "4420" 00:15:18.667 }, 00:15:18.667 "secure_channel": true 00:15:18.667 } 00:15:18.667 } 00:15:18.667 ] 00:15:18.667 } 00:15:18.667 ] 00:15:18.667 }' 00:15:18.667 01:02:30 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:18.667 01:02:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:18.667 01:02:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1258669 00:15:18.667 01:02:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:18.667 01:02:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1258669 00:15:18.667 01:02:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1258669 ']' 00:15:18.667 01:02:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.667 01:02:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:18.667 01:02:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.667 01:02:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:18.667 01:02:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:18.667 [2024-05-15 01:02:30.889331] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:15:18.667 [2024-05-15 01:02:30.889411] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.667 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.667 [2024-05-15 01:02:30.963647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.925 [2024-05-15 01:02:31.074274] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:18.925 [2024-05-15 01:02:31.074320] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:18.925 [2024-05-15 01:02:31.074333] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:18.925 [2024-05-15 01:02:31.074344] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:18.925 [2024-05-15 01:02:31.074353] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
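The target above is launched with -c /dev/fd/62, i.e. the JSON blob echoed by tls.sh is handed to nvmf_tgt as an anonymous file descriptor instead of a config file on disk. A minimal stand-alone sketch of that pattern, with a placeholder config and without the ip-netns wrapper used in this run (the retry loop stands in for the harness's waitforlisten helper):

  #!/usr/bin/env bash
  # Sketch only: start nvmf_tgt with an in-memory JSON config via process
  # substitution; bash exposes the pipe as /dev/fd/NN, which -c accepts.
  tgtconf='{"subsystems":[]}'      # placeholder; the real run feeds the full dump shown above
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &
  # Poll the default RPC socket (/var/tmp/spdk.sock) until the app answers.
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 1; done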
00:15:18.925 [2024-05-15 01:02:31.074425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.925 [2024-05-15 01:02:31.305338] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:19.182 [2024-05-15 01:02:31.321276] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:19.183 [2024-05-15 01:02:31.337300] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:19.183 [2024-05-15 01:02:31.337380] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:19.183 [2024-05-15 01:02:31.349134] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:19.441 01:02:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:19.441 01:02:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:19.441 01:02:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:19.441 01:02:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:19.441 01:02:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:19.699 01:02:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:19.699 01:02:31 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1258817 00:15:19.699 01:02:31 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1258817 /var/tmp/bdevperf.sock 00:15:19.699 01:02:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1258817 ']' 00:15:19.699 01:02:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:19.699 01:02:31 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:19.699 01:02:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:19.699 01:02:31 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:15:19.699 "subsystems": [ 00:15:19.699 { 00:15:19.699 "subsystem": "keyring", 00:15:19.699 "config": [] 00:15:19.699 }, 00:15:19.699 { 00:15:19.700 "subsystem": "iobuf", 00:15:19.700 "config": [ 00:15:19.700 { 00:15:19.700 "method": "iobuf_set_options", 00:15:19.700 "params": { 00:15:19.700 "small_pool_count": 8192, 00:15:19.700 "large_pool_count": 1024, 00:15:19.700 "small_bufsize": 8192, 00:15:19.700 "large_bufsize": 135168 00:15:19.700 } 00:15:19.700 } 00:15:19.700 ] 00:15:19.700 }, 00:15:19.700 { 00:15:19.700 "subsystem": "sock", 00:15:19.700 "config": [ 00:15:19.700 { 00:15:19.700 "method": "sock_impl_set_options", 00:15:19.700 "params": { 00:15:19.700 "impl_name": "posix", 00:15:19.700 "recv_buf_size": 2097152, 00:15:19.700 "send_buf_size": 2097152, 00:15:19.700 "enable_recv_pipe": true, 00:15:19.700 "enable_quickack": false, 00:15:19.700 "enable_placement_id": 0, 00:15:19.700 "enable_zerocopy_send_server": true, 00:15:19.700 "enable_zerocopy_send_client": false, 00:15:19.700 "zerocopy_threshold": 0, 00:15:19.700 "tls_version": 0, 00:15:19.700 "enable_ktls": false 00:15:19.700 } 00:15:19.700 }, 00:15:19.700 { 00:15:19.700 "method": "sock_impl_set_options", 00:15:19.700 "params": { 00:15:19.700 "impl_name": "ssl", 00:15:19.700 "recv_buf_size": 4096, 00:15:19.700 
"send_buf_size": 4096, 00:15:19.700 "enable_recv_pipe": true, 00:15:19.700 "enable_quickack": false, 00:15:19.700 "enable_placement_id": 0, 00:15:19.700 "enable_zerocopy_send_server": true, 00:15:19.700 "enable_zerocopy_send_client": false, 00:15:19.700 "zerocopy_threshold": 0, 00:15:19.700 "tls_version": 0, 00:15:19.700 "enable_ktls": false 00:15:19.700 } 00:15:19.700 } 00:15:19.700 ] 00:15:19.700 }, 00:15:19.700 { 00:15:19.700 "subsystem": "vmd", 00:15:19.700 "config": [] 00:15:19.700 }, 00:15:19.700 { 00:15:19.700 "subsystem": "accel", 00:15:19.700 "config": [ 00:15:19.700 { 00:15:19.700 "method": "accel_set_options", 00:15:19.700 "params": { 00:15:19.700 "small_cache_size": 128, 00:15:19.700 "large_cache_size": 16, 00:15:19.700 "task_count": 2048, 00:15:19.700 "sequence_count": 2048, 00:15:19.700 "buf_count": 2048 00:15:19.700 } 00:15:19.700 } 00:15:19.700 ] 00:15:19.700 }, 00:15:19.700 { 00:15:19.700 "subsystem": "bdev", 00:15:19.700 "config": [ 00:15:19.700 { 00:15:19.700 "method": "bdev_set_options", 00:15:19.700 "params": { 00:15:19.700 "bdev_io_pool_size": 65535, 00:15:19.700 "bdev_io_cache_size": 256, 00:15:19.700 "bdev_auto_examine": true, 00:15:19.700 "iobuf_small_cache_size": 128, 00:15:19.700 "iobuf_large_cache_size": 16 00:15:19.700 } 00:15:19.700 }, 00:15:19.700 { 00:15:19.700 "method": "bdev_raid_set_options", 00:15:19.700 "params": { 00:15:19.700 "process_window_size_kb": 1024 00:15:19.700 } 00:15:19.700 }, 00:15:19.700 { 00:15:19.700 "method": "bdev_iscsi_set_options", 00:15:19.700 "params": { 00:15:19.700 "timeout_sec": 30 00:15:19.700 } 00:15:19.700 }, 00:15:19.700 { 00:15:19.700 "method": "bdev_nvme_set_options", 00:15:19.700 "params": { 00:15:19.700 "action_on_timeout": "none", 00:15:19.700 "timeout_us": 0, 00:15:19.700 "timeout_admin_us": 0, 00:15:19.700 "keep_alive_timeout_ms": 10000, 00:15:19.700 "arbitration_burst": 0, 00:15:19.700 "low_priority_weight": 0, 00:15:19.700 "medium_priority_weight": 0, 00:15:19.700 "high_priority_weight": 0, 00:15:19.700 "nvme_adminq_poll_period_us": 10000, 00:15:19.700 "nvme_ioq_poll_period_us": 0, 00:15:19.700 "io_queue_requests": 512, 00:15:19.700 "delay_cmd_submit": true, 00:15:19.700 "transport_retry_count": 4, 00:15:19.700 "bdev_retry_count": 3, 00:15:19.700 "transport_ack_timeout": 0, 00:15:19.700 "ctrlr_loss_timeout_sec": 0, 00:15:19.700 "reconnect_delay_sec": 0, 00:15:19.700 "fast_io_fail_timeout_sec": 0, 00:15:19.700 "disable_auto_failback": false, 00:15:19.700 "generate_uuids": false, 00:15:19.700 "transport_tos": 0, 00:15:19.700 "nvme_error_stat": false, 00:15:19.700 "rdma_srq_size": 0, 00:15:19.700 "io_path_stat": false, 00:15:19.700 "allow_accel_sequence": false, 00:15:19.700 "rdma_max_cq_size": 0, 00:15:19.700 "rdma_cm_event_timeout_ms": 0, 00:15:19.700 "dhchap_digests": [ 00:15:19.700 "sha256", 00:15:19.700 "sha384", 00:15:19.700 "sha512" 00:15:19.700 ], 00:15:19.700 "dhchap_dhgroups": [ 00:15:19.700 "null", 00:15:19.700 "ffdhe2048", 00:15:19.700 "ffdhe3072", 00:15:19.700 "ffdhe4096", 00:15:19.700 "ffdhe6144", 00:15:19.700 "ffdhe8192" 00:15:19.700 ] 00:15:19.700 } 00:15:19.700 }, 00:15:19.700 { 00:15:19.700 "method": "bdev_nvme_attach_controller", 00:15:19.700 01:02:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:19.700 "params": { 00:15:19.700 "name": "TLSTEST", 00:15:19.700 "trtype": "TCP", 00:15:19.700 "adrfam": "IPv4", 00:15:19.700 "traddr": "10.0.0.2", 00:15:19.700 "trsvcid": "4420", 00:15:19.700 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:19.700 "prchk_reftag": false, 00:15:19.700 "prchk_guard": false, 00:15:19.700 "ctrlr_loss_timeout_sec": 0, 00:15:19.700 "reconnect_delay_sec": 0, 00:15:19.700 "fast_io_fail_timeout_sec": 0, 00:15:19.700 "psk": "/tmp/tmp.Dq0oGnZBHQ", 00:15:19.700 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:19.700 "hdgst": false, 00:15:19.700 "ddgst": false 00:15:19.700 } 00:15:19.700 }, 00:15:19.700 { 00:15:19.700 "method": "bdev_nvme_set_hotplug", 00:15:19.700 "params": { 00:15:19.700 "period_us": 100000, 00:15:19.700 "enable": false 00:15:19.700 } 00:15:19.700 }, 00:15:19.700 { 00:15:19.700 "method": "bdev_wait_for_examine" 00:15:19.700 } 00:15:19.700 ] 00:15:19.700 }, 00:15:19.700 { 00:15:19.700 "subsystem": "nbd", 00:15:19.700 "config": [] 00:15:19.700 } 00:15:19.700 ] 00:15:19.700 }' 00:15:19.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:19.700 01:02:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:19.700 01:02:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:19.700 [2024-05-15 01:02:31.901087] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:15:19.700 [2024-05-15 01:02:31.901171] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258817 ] 00:15:19.700 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.700 [2024-05-15 01:02:31.968293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.700 [2024-05-15 01:02:32.073451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:19.958 [2024-05-15 01:02:32.234592] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:19.958 [2024-05-15 01:02:32.234723] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:20.522 01:02:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:20.522 01:02:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:20.522 01:02:32 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:20.778 Running I/O for 10 seconds... 
00:15:30.735 00:15:30.735 Latency(us) 00:15:30.735 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.735 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:30.735 Verification LBA range: start 0x0 length 0x2000 00:15:30.735 TLSTESTn1 : 10.06 809.65 3.16 0.00 0.00 157704.89 10437.21 168548.88 00:15:30.735 =================================================================================================================== 00:15:30.735 Total : 809.65 3.16 0.00 0.00 157704.89 10437.21 168548.88 00:15:30.735 0 00:15:30.735 01:02:43 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:30.735 01:02:43 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1258817 00:15:30.735 01:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1258817 ']' 00:15:30.735 01:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1258817 00:15:30.735 01:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:30.735 01:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:30.735 01:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1258817 00:15:30.735 01:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:15:30.735 01:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:15:30.735 01:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1258817' 00:15:30.735 killing process with pid 1258817 00:15:30.735 01:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1258817 00:15:30.735 Received shutdown signal, test time was about 10.000000 seconds 00:15:30.735 00:15:30.735 Latency(us) 00:15:30.735 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.735 =================================================================================================================== 00:15:30.735 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:30.735 [2024-05-15 01:02:43.104090] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:30.735 01:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1258817 00:15:30.993 01:02:43 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1258669 00:15:30.993 01:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1258669 ']' 00:15:30.993 01:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1258669 00:15:30.993 01:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:30.993 01:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:30.993 01:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1258669 00:15:31.252 01:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:31.252 01:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:31.252 01:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1258669' 00:15:31.252 killing process with pid 1258669 00:15:31.252 01:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1258669 00:15:31.252 [2024-05-15 01:02:43.395229] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation 
'[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:31.252 [2024-05-15 01:02:43.395294] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:31.252 01:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1258669 00:15:31.511 01:02:43 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:15:31.511 01:02:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:31.511 01:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:31.511 01:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:31.511 01:02:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1260269 00:15:31.511 01:02:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:31.511 01:02:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1260269 00:15:31.511 01:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1260269 ']' 00:15:31.511 01:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.511 01:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:31.511 01:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.511 01:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:31.511 01:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:31.511 [2024-05-15 01:02:43.732106] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:15:31.511 [2024-05-15 01:02:43.732181] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.511 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.512 [2024-05-15 01:02:43.805694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.769 [2024-05-15 01:02:43.911364] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:31.769 [2024-05-15 01:02:43.911411] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:31.769 [2024-05-15 01:02:43.911439] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:31.769 [2024-05-15 01:02:43.911450] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:31.769 [2024-05-15 01:02:43.911459] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
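The fresh target started above is configured live by the setup_nvmf_tgt helper on the next lines rather than through a config blob. Condensed to the bare rpc.py calls visible below (full Jenkins paths and error handling omitted), the sequence is roughly:

  # Sketch of the live RPC sequence for the TLS-enabled subsystem; addresses,
  # NQNs and the PSK file match this run.
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Dq0oGnZBHQ

The -k flag on the listener is what requests the TLS secure channel (the saved config earlier records it as "secure_channel": true), and --psk on add_host pins the pre-shared key file for that host NQN.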
00:15:31.769 [2024-05-15 01:02:43.911484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.769 01:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:31.769 01:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:31.769 01:02:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:31.769 01:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:31.769 01:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:31.769 01:02:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:31.769 01:02:44 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.Dq0oGnZBHQ 00:15:31.769 01:02:44 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Dq0oGnZBHQ 00:15:31.769 01:02:44 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:32.028 [2024-05-15 01:02:44.271096] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:32.028 01:02:44 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:32.286 01:02:44 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:32.544 [2024-05-15 01:02:44.756353] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:32.544 [2024-05-15 01:02:44.756485] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:32.544 [2024-05-15 01:02:44.756715] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:32.544 01:02:44 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:32.802 malloc0 00:15:32.802 01:02:45 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:33.060 01:02:45 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Dq0oGnZBHQ 00:15:33.319 [2024-05-15 01:02:45.534982] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:33.319 01:02:45 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1260432 00:15:33.319 01:02:45 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:33.319 01:02:45 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1260432 /var/tmp/bdevperf.sock 00:15:33.319 01:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1260432 ']' 00:15:33.319 01:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:33.319 01:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:33.319 01:02:45 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:33.319 01:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:33.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:33.319 01:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:33.319 01:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:33.319 [2024-05-15 01:02:45.599084] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:15:33.319 [2024-05-15 01:02:45.599155] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260432 ] 00:15:33.319 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.319 [2024-05-15 01:02:45.671780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.581 [2024-05-15 01:02:45.789674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.551 01:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:34.551 01:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:34.551 01:02:46 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Dq0oGnZBHQ 00:15:34.551 01:02:46 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:34.810 [2024-05-15 01:02:47.080665] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:34.810 nvme0n1 00:15:34.810 01:02:47 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:35.068 Running I/O for 1 seconds... 
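This bdevperf instance attaches through the keyring: the PSK file is registered as key0 first and the controller then references the key by name, unlike the earlier TLSTEST run that passed the raw file path in its config ("psk": "/tmp/tmp.Dq0oGnZBHQ"). The 1-second verify results follow below; the host-side commands reduce to:

  # Sketch: keyring-based TLS attach on the bdevperf side (same socket, NQNs
  # and key file as this run; Jenkins paths shortened).
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Dq0oGnZBHQ
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests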
00:15:36.003 00:15:36.003 Latency(us) 00:15:36.003 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:36.003 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:36.003 Verification LBA range: start 0x0 length 0x2000 00:15:36.003 nvme0n1 : 1.07 760.40 2.97 0.00 0.00 162977.92 6747.78 121168.78 00:15:36.003 =================================================================================================================== 00:15:36.003 Total : 760.40 2.97 0.00 0.00 162977.92 6747.78 121168.78 00:15:36.003 0 00:15:36.003 01:02:48 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1260432 00:15:36.003 01:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1260432 ']' 00:15:36.003 01:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1260432 00:15:36.003 01:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:36.003 01:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:36.003 01:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1260432 00:15:36.261 01:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:36.261 01:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:36.261 01:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1260432' 00:15:36.261 killing process with pid 1260432 00:15:36.261 01:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1260432 00:15:36.261 Received shutdown signal, test time was about 1.000000 seconds 00:15:36.261 00:15:36.261 Latency(us) 00:15:36.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:36.261 =================================================================================================================== 00:15:36.261 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:36.261 01:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1260432 00:15:36.519 01:02:48 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1260269 00:15:36.519 01:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1260269 ']' 00:15:36.519 01:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1260269 00:15:36.519 01:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:36.519 01:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:36.519 01:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1260269 00:15:36.519 01:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:36.519 01:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:36.519 01:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1260269' 00:15:36.519 killing process with pid 1260269 00:15:36.519 01:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1260269 00:15:36.519 [2024-05-15 01:02:48.715954] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:36.519 [2024-05-15 01:02:48.716020] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:36.519 01:02:48 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@970 -- # wait 1260269 00:15:36.778 01:02:49 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:15:36.778 01:02:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:36.778 01:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:36.778 01:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:36.778 01:02:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1260849 00:15:36.778 01:02:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:36.778 01:02:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1260849 00:15:36.778 01:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1260849 ']' 00:15:36.778 01:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.778 01:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:36.778 01:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.778 01:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:36.778 01:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:36.778 [2024-05-15 01:02:49.061783] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:15:36.778 [2024-05-15 01:02:49.061873] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.778 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.778 [2024-05-15 01:02:49.165163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.036 [2024-05-15 01:02:49.297885] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:37.036 [2024-05-15 01:02:49.297983] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:37.036 [2024-05-15 01:02:49.298023] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:37.036 [2024-05-15 01:02:49.298048] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:37.036 [2024-05-15 01:02:49.298067] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
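Once this last target is up, the test configures it in-band and then snapshots both sides with save_config; the tgtcfg and bperfcfg JSON dumps further below are those snapshots. A sketch of capturing them to files for offline inspection (output paths and the jq filter are illustrative, any JSON pretty-printer works):

  # Sketch: dump the live configuration of the target and of the bdevperf app.
  rpc.py save_config                            > /tmp/tgtcfg.json    # target, default /var/tmp/spdk.sock
  rpc.py -s /var/tmp/bdevperf.sock save_config  > /tmp/bperfcfg.json  # bdevperf instance
  # Pull out just the nvmf subsystem to check listener/host/PSK settings.
  jq '.subsystems[] | select(.subsystem == "nvmf")' /tmp/tgtcfg.json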
00:15:37.036 [2024-05-15 01:02:49.298119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.036 01:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:37.037 01:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:37.037 01:02:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:37.037 01:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:37.037 01:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:37.295 01:02:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:37.295 01:02:49 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:15:37.295 01:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.295 01:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:37.295 [2024-05-15 01:02:49.451730] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:37.295 malloc0 00:15:37.295 [2024-05-15 01:02:49.483557] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:37.295 [2024-05-15 01:02:49.483662] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:37.295 [2024-05-15 01:02:49.483858] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:37.295 01:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.295 01:02:49 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=1260984 00:15:37.295 01:02:49 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:37.295 01:02:49 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 1260984 /var/tmp/bdevperf.sock 00:15:37.295 01:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1260984 ']' 00:15:37.295 01:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:37.295 01:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:37.295 01:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:37.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:37.295 01:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:37.295 01:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:37.295 [2024-05-15 01:02:49.549852] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:15:37.295 [2024-05-15 01:02:49.549926] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260984 ] 00:15:37.295 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.295 [2024-05-15 01:02:49.617979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.553 [2024-05-15 01:02:49.734483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.553 01:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:37.553 01:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:37.553 01:02:49 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Dq0oGnZBHQ 00:15:37.819 01:02:50 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:38.078 [2024-05-15 01:02:50.366132] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:38.078 nvme0n1 00:15:38.078 01:02:50 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:38.335 Running I/O for 1 seconds... 00:15:39.709 00:15:39.709 Latency(us) 00:15:39.709 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:39.709 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:39.709 Verification LBA range: start 0x0 length 0x2000 00:15:39.709 nvme0n1 : 1.09 1395.94 5.45 0.00 0.00 88873.89 6310.87 130489.46 00:15:39.709 =================================================================================================================== 00:15:39.709 Total : 1395.94 5.45 0.00 0.00 88873.89 6310.87 130489.46 00:15:39.709 0 00:15:39.709 01:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:15:39.709 01:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.709 01:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:39.709 01:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.709 01:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:15:39.709 "subsystems": [ 00:15:39.709 { 00:15:39.709 "subsystem": "keyring", 00:15:39.709 "config": [ 00:15:39.709 { 00:15:39.709 "method": "keyring_file_add_key", 00:15:39.709 "params": { 00:15:39.709 "name": "key0", 00:15:39.709 "path": "/tmp/tmp.Dq0oGnZBHQ" 00:15:39.709 } 00:15:39.709 } 00:15:39.709 ] 00:15:39.709 }, 00:15:39.709 { 00:15:39.709 "subsystem": "iobuf", 00:15:39.709 "config": [ 00:15:39.709 { 00:15:39.709 "method": "iobuf_set_options", 00:15:39.709 "params": { 00:15:39.709 "small_pool_count": 8192, 00:15:39.709 "large_pool_count": 1024, 00:15:39.709 "small_bufsize": 8192, 00:15:39.709 "large_bufsize": 135168 00:15:39.709 } 00:15:39.709 } 00:15:39.709 ] 00:15:39.709 }, 00:15:39.709 { 00:15:39.709 "subsystem": "sock", 00:15:39.709 "config": [ 00:15:39.709 { 00:15:39.709 "method": "sock_impl_set_options", 00:15:39.709 "params": { 00:15:39.709 "impl_name": "posix", 00:15:39.709 "recv_buf_size": 2097152, 
00:15:39.709 "send_buf_size": 2097152, 00:15:39.709 "enable_recv_pipe": true, 00:15:39.709 "enable_quickack": false, 00:15:39.709 "enable_placement_id": 0, 00:15:39.709 "enable_zerocopy_send_server": true, 00:15:39.709 "enable_zerocopy_send_client": false, 00:15:39.709 "zerocopy_threshold": 0, 00:15:39.709 "tls_version": 0, 00:15:39.709 "enable_ktls": false 00:15:39.709 } 00:15:39.709 }, 00:15:39.709 { 00:15:39.709 "method": "sock_impl_set_options", 00:15:39.709 "params": { 00:15:39.709 "impl_name": "ssl", 00:15:39.709 "recv_buf_size": 4096, 00:15:39.709 "send_buf_size": 4096, 00:15:39.709 "enable_recv_pipe": true, 00:15:39.709 "enable_quickack": false, 00:15:39.709 "enable_placement_id": 0, 00:15:39.709 "enable_zerocopy_send_server": true, 00:15:39.709 "enable_zerocopy_send_client": false, 00:15:39.709 "zerocopy_threshold": 0, 00:15:39.709 "tls_version": 0, 00:15:39.709 "enable_ktls": false 00:15:39.709 } 00:15:39.709 } 00:15:39.709 ] 00:15:39.709 }, 00:15:39.709 { 00:15:39.709 "subsystem": "vmd", 00:15:39.709 "config": [] 00:15:39.709 }, 00:15:39.709 { 00:15:39.709 "subsystem": "accel", 00:15:39.709 "config": [ 00:15:39.709 { 00:15:39.709 "method": "accel_set_options", 00:15:39.710 "params": { 00:15:39.710 "small_cache_size": 128, 00:15:39.710 "large_cache_size": 16, 00:15:39.710 "task_count": 2048, 00:15:39.710 "sequence_count": 2048, 00:15:39.710 "buf_count": 2048 00:15:39.710 } 00:15:39.710 } 00:15:39.710 ] 00:15:39.710 }, 00:15:39.710 { 00:15:39.710 "subsystem": "bdev", 00:15:39.710 "config": [ 00:15:39.710 { 00:15:39.710 "method": "bdev_set_options", 00:15:39.710 "params": { 00:15:39.710 "bdev_io_pool_size": 65535, 00:15:39.710 "bdev_io_cache_size": 256, 00:15:39.710 "bdev_auto_examine": true, 00:15:39.710 "iobuf_small_cache_size": 128, 00:15:39.710 "iobuf_large_cache_size": 16 00:15:39.710 } 00:15:39.710 }, 00:15:39.710 { 00:15:39.710 "method": "bdev_raid_set_options", 00:15:39.710 "params": { 00:15:39.710 "process_window_size_kb": 1024 00:15:39.710 } 00:15:39.710 }, 00:15:39.710 { 00:15:39.710 "method": "bdev_iscsi_set_options", 00:15:39.710 "params": { 00:15:39.710 "timeout_sec": 30 00:15:39.710 } 00:15:39.710 }, 00:15:39.710 { 00:15:39.710 "method": "bdev_nvme_set_options", 00:15:39.710 "params": { 00:15:39.710 "action_on_timeout": "none", 00:15:39.710 "timeout_us": 0, 00:15:39.710 "timeout_admin_us": 0, 00:15:39.710 "keep_alive_timeout_ms": 10000, 00:15:39.710 "arbitration_burst": 0, 00:15:39.710 "low_priority_weight": 0, 00:15:39.710 "medium_priority_weight": 0, 00:15:39.710 "high_priority_weight": 0, 00:15:39.710 "nvme_adminq_poll_period_us": 10000, 00:15:39.710 "nvme_ioq_poll_period_us": 0, 00:15:39.710 "io_queue_requests": 0, 00:15:39.710 "delay_cmd_submit": true, 00:15:39.710 "transport_retry_count": 4, 00:15:39.710 "bdev_retry_count": 3, 00:15:39.710 "transport_ack_timeout": 0, 00:15:39.710 "ctrlr_loss_timeout_sec": 0, 00:15:39.710 "reconnect_delay_sec": 0, 00:15:39.710 "fast_io_fail_timeout_sec": 0, 00:15:39.710 "disable_auto_failback": false, 00:15:39.710 "generate_uuids": false, 00:15:39.710 "transport_tos": 0, 00:15:39.710 "nvme_error_stat": false, 00:15:39.710 "rdma_srq_size": 0, 00:15:39.710 "io_path_stat": false, 00:15:39.710 "allow_accel_sequence": false, 00:15:39.710 "rdma_max_cq_size": 0, 00:15:39.710 "rdma_cm_event_timeout_ms": 0, 00:15:39.710 "dhchap_digests": [ 00:15:39.710 "sha256", 00:15:39.710 "sha384", 00:15:39.710 "sha512" 00:15:39.710 ], 00:15:39.710 "dhchap_dhgroups": [ 00:15:39.710 "null", 00:15:39.710 "ffdhe2048", 00:15:39.710 "ffdhe3072", 
00:15:39.710 "ffdhe4096", 00:15:39.710 "ffdhe6144", 00:15:39.710 "ffdhe8192" 00:15:39.710 ] 00:15:39.710 } 00:15:39.710 }, 00:15:39.710 { 00:15:39.710 "method": "bdev_nvme_set_hotplug", 00:15:39.710 "params": { 00:15:39.710 "period_us": 100000, 00:15:39.710 "enable": false 00:15:39.710 } 00:15:39.710 }, 00:15:39.710 { 00:15:39.710 "method": "bdev_malloc_create", 00:15:39.710 "params": { 00:15:39.710 "name": "malloc0", 00:15:39.710 "num_blocks": 8192, 00:15:39.710 "block_size": 4096, 00:15:39.710 "physical_block_size": 4096, 00:15:39.710 "uuid": "37008924-ea67-4fcb-b406-6af55373799b", 00:15:39.710 "optimal_io_boundary": 0 00:15:39.710 } 00:15:39.710 }, 00:15:39.710 { 00:15:39.710 "method": "bdev_wait_for_examine" 00:15:39.710 } 00:15:39.710 ] 00:15:39.710 }, 00:15:39.710 { 00:15:39.710 "subsystem": "nbd", 00:15:39.710 "config": [] 00:15:39.710 }, 00:15:39.710 { 00:15:39.710 "subsystem": "scheduler", 00:15:39.710 "config": [ 00:15:39.710 { 00:15:39.710 "method": "framework_set_scheduler", 00:15:39.710 "params": { 00:15:39.710 "name": "static" 00:15:39.710 } 00:15:39.710 } 00:15:39.710 ] 00:15:39.710 }, 00:15:39.710 { 00:15:39.710 "subsystem": "nvmf", 00:15:39.710 "config": [ 00:15:39.710 { 00:15:39.710 "method": "nvmf_set_config", 00:15:39.710 "params": { 00:15:39.710 "discovery_filter": "match_any", 00:15:39.710 "admin_cmd_passthru": { 00:15:39.710 "identify_ctrlr": false 00:15:39.710 } 00:15:39.710 } 00:15:39.710 }, 00:15:39.710 { 00:15:39.710 "method": "nvmf_set_max_subsystems", 00:15:39.710 "params": { 00:15:39.710 "max_subsystems": 1024 00:15:39.710 } 00:15:39.710 }, 00:15:39.710 { 00:15:39.710 "method": "nvmf_set_crdt", 00:15:39.710 "params": { 00:15:39.710 "crdt1": 0, 00:15:39.710 "crdt2": 0, 00:15:39.710 "crdt3": 0 00:15:39.710 } 00:15:39.710 }, 00:15:39.710 { 00:15:39.710 "method": "nvmf_create_transport", 00:15:39.710 "params": { 00:15:39.710 "trtype": "TCP", 00:15:39.710 "max_queue_depth": 128, 00:15:39.710 "max_io_qpairs_per_ctrlr": 127, 00:15:39.710 "in_capsule_data_size": 4096, 00:15:39.710 "max_io_size": 131072, 00:15:39.710 "io_unit_size": 131072, 00:15:39.710 "max_aq_depth": 128, 00:15:39.710 "num_shared_buffers": 511, 00:15:39.710 "buf_cache_size": 4294967295, 00:15:39.710 "dif_insert_or_strip": false, 00:15:39.710 "zcopy": false, 00:15:39.710 "c2h_success": false, 00:15:39.710 "sock_priority": 0, 00:15:39.710 "abort_timeout_sec": 1, 00:15:39.710 "ack_timeout": 0, 00:15:39.710 "data_wr_pool_size": 0 00:15:39.710 } 00:15:39.710 }, 00:15:39.710 { 00:15:39.710 "method": "nvmf_create_subsystem", 00:15:39.710 "params": { 00:15:39.710 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.710 "allow_any_host": false, 00:15:39.710 "serial_number": "00000000000000000000", 00:15:39.710 "model_number": "SPDK bdev Controller", 00:15:39.710 "max_namespaces": 32, 00:15:39.710 "min_cntlid": 1, 00:15:39.710 "max_cntlid": 65519, 00:15:39.710 "ana_reporting": false 00:15:39.710 } 00:15:39.710 }, 00:15:39.710 { 00:15:39.710 "method": "nvmf_subsystem_add_host", 00:15:39.710 "params": { 00:15:39.710 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.710 "host": "nqn.2016-06.io.spdk:host1", 00:15:39.710 "psk": "key0" 00:15:39.710 } 00:15:39.710 }, 00:15:39.710 { 00:15:39.710 "method": "nvmf_subsystem_add_ns", 00:15:39.710 "params": { 00:15:39.710 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.710 "namespace": { 00:15:39.710 "nsid": 1, 00:15:39.710 "bdev_name": "malloc0", 00:15:39.710 "nguid": "37008924EA674FCBB4066AF55373799B", 00:15:39.710 "uuid": "37008924-ea67-4fcb-b406-6af55373799b", 00:15:39.710 
"no_auto_visible": false 00:15:39.710 } 00:15:39.710 } 00:15:39.710 }, 00:15:39.710 { 00:15:39.710 "method": "nvmf_subsystem_add_listener", 00:15:39.710 "params": { 00:15:39.710 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.710 "listen_address": { 00:15:39.710 "trtype": "TCP", 00:15:39.710 "adrfam": "IPv4", 00:15:39.710 "traddr": "10.0.0.2", 00:15:39.710 "trsvcid": "4420" 00:15:39.710 }, 00:15:39.710 "secure_channel": true 00:15:39.710 } 00:15:39.710 } 00:15:39.710 ] 00:15:39.710 } 00:15:39.710 ] 00:15:39.710 }' 00:15:39.710 01:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:39.968 01:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:15:39.968 "subsystems": [ 00:15:39.968 { 00:15:39.968 "subsystem": "keyring", 00:15:39.968 "config": [ 00:15:39.968 { 00:15:39.968 "method": "keyring_file_add_key", 00:15:39.968 "params": { 00:15:39.968 "name": "key0", 00:15:39.968 "path": "/tmp/tmp.Dq0oGnZBHQ" 00:15:39.968 } 00:15:39.968 } 00:15:39.968 ] 00:15:39.968 }, 00:15:39.968 { 00:15:39.968 "subsystem": "iobuf", 00:15:39.968 "config": [ 00:15:39.968 { 00:15:39.968 "method": "iobuf_set_options", 00:15:39.968 "params": { 00:15:39.968 "small_pool_count": 8192, 00:15:39.968 "large_pool_count": 1024, 00:15:39.968 "small_bufsize": 8192, 00:15:39.968 "large_bufsize": 135168 00:15:39.968 } 00:15:39.968 } 00:15:39.968 ] 00:15:39.968 }, 00:15:39.968 { 00:15:39.968 "subsystem": "sock", 00:15:39.968 "config": [ 00:15:39.968 { 00:15:39.968 "method": "sock_impl_set_options", 00:15:39.968 "params": { 00:15:39.968 "impl_name": "posix", 00:15:39.968 "recv_buf_size": 2097152, 00:15:39.968 "send_buf_size": 2097152, 00:15:39.968 "enable_recv_pipe": true, 00:15:39.968 "enable_quickack": false, 00:15:39.968 "enable_placement_id": 0, 00:15:39.968 "enable_zerocopy_send_server": true, 00:15:39.968 "enable_zerocopy_send_client": false, 00:15:39.968 "zerocopy_threshold": 0, 00:15:39.968 "tls_version": 0, 00:15:39.968 "enable_ktls": false 00:15:39.968 } 00:15:39.968 }, 00:15:39.968 { 00:15:39.968 "method": "sock_impl_set_options", 00:15:39.968 "params": { 00:15:39.968 "impl_name": "ssl", 00:15:39.968 "recv_buf_size": 4096, 00:15:39.968 "send_buf_size": 4096, 00:15:39.968 "enable_recv_pipe": true, 00:15:39.968 "enable_quickack": false, 00:15:39.968 "enable_placement_id": 0, 00:15:39.968 "enable_zerocopy_send_server": true, 00:15:39.968 "enable_zerocopy_send_client": false, 00:15:39.968 "zerocopy_threshold": 0, 00:15:39.968 "tls_version": 0, 00:15:39.968 "enable_ktls": false 00:15:39.968 } 00:15:39.968 } 00:15:39.968 ] 00:15:39.968 }, 00:15:39.968 { 00:15:39.968 "subsystem": "vmd", 00:15:39.968 "config": [] 00:15:39.968 }, 00:15:39.968 { 00:15:39.968 "subsystem": "accel", 00:15:39.968 "config": [ 00:15:39.968 { 00:15:39.968 "method": "accel_set_options", 00:15:39.968 "params": { 00:15:39.968 "small_cache_size": 128, 00:15:39.968 "large_cache_size": 16, 00:15:39.968 "task_count": 2048, 00:15:39.968 "sequence_count": 2048, 00:15:39.968 "buf_count": 2048 00:15:39.968 } 00:15:39.968 } 00:15:39.968 ] 00:15:39.969 }, 00:15:39.969 { 00:15:39.969 "subsystem": "bdev", 00:15:39.969 "config": [ 00:15:39.969 { 00:15:39.969 "method": "bdev_set_options", 00:15:39.969 "params": { 00:15:39.969 "bdev_io_pool_size": 65535, 00:15:39.969 "bdev_io_cache_size": 256, 00:15:39.969 "bdev_auto_examine": true, 00:15:39.969 "iobuf_small_cache_size": 128, 00:15:39.969 "iobuf_large_cache_size": 16 00:15:39.969 } 00:15:39.969 }, 
00:15:39.969 { 00:15:39.969 "method": "bdev_raid_set_options", 00:15:39.969 "params": { 00:15:39.969 "process_window_size_kb": 1024 00:15:39.969 } 00:15:39.969 }, 00:15:39.969 { 00:15:39.969 "method": "bdev_iscsi_set_options", 00:15:39.969 "params": { 00:15:39.969 "timeout_sec": 30 00:15:39.969 } 00:15:39.969 }, 00:15:39.969 { 00:15:39.969 "method": "bdev_nvme_set_options", 00:15:39.969 "params": { 00:15:39.969 "action_on_timeout": "none", 00:15:39.969 "timeout_us": 0, 00:15:39.969 "timeout_admin_us": 0, 00:15:39.969 "keep_alive_timeout_ms": 10000, 00:15:39.969 "arbitration_burst": 0, 00:15:39.969 "low_priority_weight": 0, 00:15:39.969 "medium_priority_weight": 0, 00:15:39.969 "high_priority_weight": 0, 00:15:39.969 "nvme_adminq_poll_period_us": 10000, 00:15:39.969 "nvme_ioq_poll_period_us": 0, 00:15:39.969 "io_queue_requests": 512, 00:15:39.969 "delay_cmd_submit": true, 00:15:39.969 "transport_retry_count": 4, 00:15:39.969 "bdev_retry_count": 3, 00:15:39.969 "transport_ack_timeout": 0, 00:15:39.969 "ctrlr_loss_timeout_sec": 0, 00:15:39.969 "reconnect_delay_sec": 0, 00:15:39.969 "fast_io_fail_timeout_sec": 0, 00:15:39.969 "disable_auto_failback": false, 00:15:39.969 "generate_uuids": false, 00:15:39.969 "transport_tos": 0, 00:15:39.969 "nvme_error_stat": false, 00:15:39.969 "rdma_srq_size": 0, 00:15:39.969 "io_path_stat": false, 00:15:39.969 "allow_accel_sequence": false, 00:15:39.969 "rdma_max_cq_size": 0, 00:15:39.969 "rdma_cm_event_timeout_ms": 0, 00:15:39.969 "dhchap_digests": [ 00:15:39.969 "sha256", 00:15:39.969 "sha384", 00:15:39.969 "sha512" 00:15:39.969 ], 00:15:39.969 "dhchap_dhgroups": [ 00:15:39.969 "null", 00:15:39.969 "ffdhe2048", 00:15:39.969 "ffdhe3072", 00:15:39.969 "ffdhe4096", 00:15:39.969 "ffdhe6144", 00:15:39.969 "ffdhe8192" 00:15:39.969 ] 00:15:39.969 } 00:15:39.969 }, 00:15:39.969 { 00:15:39.969 "method": "bdev_nvme_attach_controller", 00:15:39.969 "params": { 00:15:39.969 "name": "nvme0", 00:15:39.969 "trtype": "TCP", 00:15:39.969 "adrfam": "IPv4", 00:15:39.969 "traddr": "10.0.0.2", 00:15:39.969 "trsvcid": "4420", 00:15:39.969 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.969 "prchk_reftag": false, 00:15:39.969 "prchk_guard": false, 00:15:39.969 "ctrlr_loss_timeout_sec": 0, 00:15:39.969 "reconnect_delay_sec": 0, 00:15:39.969 "fast_io_fail_timeout_sec": 0, 00:15:39.969 "psk": "key0", 00:15:39.969 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:39.969 "hdgst": false, 00:15:39.969 "ddgst": false 00:15:39.969 } 00:15:39.969 }, 00:15:39.969 { 00:15:39.969 "method": "bdev_nvme_set_hotplug", 00:15:39.969 "params": { 00:15:39.969 "period_us": 100000, 00:15:39.969 "enable": false 00:15:39.969 } 00:15:39.969 }, 00:15:39.969 { 00:15:39.969 "method": "bdev_enable_histogram", 00:15:39.969 "params": { 00:15:39.969 "name": "nvme0n1", 00:15:39.969 "enable": true 00:15:39.969 } 00:15:39.969 }, 00:15:39.969 { 00:15:39.969 "method": "bdev_wait_for_examine" 00:15:39.969 } 00:15:39.969 ] 00:15:39.969 }, 00:15:39.969 { 00:15:39.969 "subsystem": "nbd", 00:15:39.969 "config": [] 00:15:39.969 } 00:15:39.969 ] 00:15:39.969 }' 00:15:39.969 01:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 1260984 00:15:39.969 01:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1260984 ']' 00:15:39.969 01:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1260984 00:15:39.969 01:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:39.969 01:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:39.969 
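[editor's sketch] The bperfcfg blob captured above is simply the output of the save_config RPC issued against the bdevperf socket (target/tls.sh@264). As a rough sketch of that round trip — the file name below is illustrative, not something taken from this run — such a dump can be replayed into a fresh SPDK application with the companion load_config helper (which, at least in this SPDK vintage, reads the JSON from stdin):

  # capture the live configuration of the running app over its RPC socket
  scripts/rpc.py -s /var/tmp/bdevperf.sock save_config > bperf.json
  # later, against a freshly started app waiting on the same socket,
  # feed the JSON back in (file name here is made up for the sketch)
  scripts/rpc.py -s /var/tmp/bdevperf.sock load_config < bperf.json

In this particular log the script instead pipes an edited copy of the config into bdevperf at startup via -c /dev/fd/63, rather than calling load_config afterwards.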
01:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1260984 00:15:39.969 01:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:39.969 01:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:39.969 01:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1260984' 00:15:39.969 killing process with pid 1260984 00:15:39.969 01:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1260984 00:15:39.969 Received shutdown signal, test time was about 1.000000 seconds 00:15:39.969 00:15:39.969 Latency(us) 00:15:39.969 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:39.969 =================================================================================================================== 00:15:39.969 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:39.969 01:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1260984 00:15:40.227 01:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 1260849 00:15:40.227 01:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1260849 ']' 00:15:40.227 01:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1260849 00:15:40.227 01:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:40.227 01:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:40.227 01:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1260849 00:15:40.227 01:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:40.227 01:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:40.227 01:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1260849' 00:15:40.227 killing process with pid 1260849 00:15:40.227 01:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1260849 00:15:40.227 [2024-05-15 01:02:52.477594] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:40.227 01:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1260849 00:15:40.485 01:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:15:40.485 01:02:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:40.485 01:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:15:40.485 "subsystems": [ 00:15:40.485 { 00:15:40.485 "subsystem": "keyring", 00:15:40.485 "config": [ 00:15:40.485 { 00:15:40.486 "method": "keyring_file_add_key", 00:15:40.486 "params": { 00:15:40.486 "name": "key0", 00:15:40.486 "path": "/tmp/tmp.Dq0oGnZBHQ" 00:15:40.486 } 00:15:40.486 } 00:15:40.486 ] 00:15:40.486 }, 00:15:40.486 { 00:15:40.486 "subsystem": "iobuf", 00:15:40.486 "config": [ 00:15:40.486 { 00:15:40.486 "method": "iobuf_set_options", 00:15:40.486 "params": { 00:15:40.486 "small_pool_count": 8192, 00:15:40.486 "large_pool_count": 1024, 00:15:40.486 "small_bufsize": 8192, 00:15:40.486 "large_bufsize": 135168 00:15:40.486 } 00:15:40.486 } 00:15:40.486 ] 00:15:40.486 }, 00:15:40.486 { 00:15:40.486 "subsystem": "sock", 00:15:40.486 "config": [ 00:15:40.486 { 00:15:40.486 "method": "sock_impl_set_options", 00:15:40.486 "params": { 00:15:40.486 "impl_name": "posix", 00:15:40.486 
"recv_buf_size": 2097152, 00:15:40.486 "send_buf_size": 2097152, 00:15:40.486 "enable_recv_pipe": true, 00:15:40.486 "enable_quickack": false, 00:15:40.486 "enable_placement_id": 0, 00:15:40.486 "enable_zerocopy_send_server": true, 00:15:40.486 "enable_zerocopy_send_client": false, 00:15:40.486 "zerocopy_threshold": 0, 00:15:40.486 "tls_version": 0, 00:15:40.486 "enable_ktls": false 00:15:40.486 } 00:15:40.486 }, 00:15:40.486 { 00:15:40.486 "method": "sock_impl_set_options", 00:15:40.486 "params": { 00:15:40.486 "impl_name": "ssl", 00:15:40.486 "recv_buf_size": 4096, 00:15:40.486 "send_buf_size": 4096, 00:15:40.486 "enable_recv_pipe": true, 00:15:40.486 "enable_quickack": false, 00:15:40.486 "enable_placement_id": 0, 00:15:40.486 "enable_zerocopy_send_server": true, 00:15:40.486 "enable_zerocopy_send_client": false, 00:15:40.486 "zerocopy_threshold": 0, 00:15:40.486 "tls_version": 0, 00:15:40.486 "enable_ktls": false 00:15:40.486 } 00:15:40.486 } 00:15:40.486 ] 00:15:40.486 }, 00:15:40.486 { 00:15:40.486 "subsystem": "vmd", 00:15:40.486 "config": [] 00:15:40.486 }, 00:15:40.486 { 00:15:40.486 "subsystem": "accel", 00:15:40.486 "config": [ 00:15:40.486 { 00:15:40.486 "method": "accel_set_options", 00:15:40.486 "params": { 00:15:40.486 "small_cache_size": 128, 00:15:40.486 "large_cache_size": 16, 00:15:40.486 "task_count": 2048, 00:15:40.486 "sequence_count": 2048, 00:15:40.486 "buf_count": 2048 00:15:40.486 } 00:15:40.486 } 00:15:40.486 ] 00:15:40.486 }, 00:15:40.486 { 00:15:40.486 "subsystem": "bdev", 00:15:40.486 "config": [ 00:15:40.486 { 00:15:40.486 "method": "bdev_set_options", 00:15:40.486 "params": { 00:15:40.486 "bdev_io_pool_size": 65535, 00:15:40.486 "bdev_io_cache_size": 256, 00:15:40.486 "bdev_auto_examine": true, 00:15:40.486 "iobuf_small_cache_size": 128, 00:15:40.486 "iobuf_large_cache_size": 16 00:15:40.486 } 00:15:40.486 }, 00:15:40.486 { 00:15:40.486 "method": "bdev_raid_set_options", 00:15:40.486 "params": { 00:15:40.486 "process_window_size_kb": 1024 00:15:40.486 } 00:15:40.486 }, 00:15:40.486 { 00:15:40.486 "method": "bdev_iscsi_set_options", 00:15:40.486 "params": { 00:15:40.486 "timeout_sec": 30 00:15:40.486 } 00:15:40.486 }, 00:15:40.486 { 00:15:40.486 "method": "bdev_nvme_set_options", 00:15:40.486 "params": { 00:15:40.486 "action_on_timeout": "none", 00:15:40.486 "timeout_us": 0, 00:15:40.486 "timeout_admin_us": 0, 00:15:40.486 "keep_alive_timeout_ms": 10000, 00:15:40.486 "arbitration_burst": 0, 00:15:40.486 "low_priority_weight": 0, 00:15:40.486 "medium_priority_weight": 0, 00:15:40.486 "high_priority_weight": 0, 00:15:40.486 "nvme_adminq_poll_period_us": 10000, 00:15:40.486 "nvme_ioq_poll_period_us": 0, 00:15:40.486 "io_queue_requests": 0, 00:15:40.486 "delay_cmd_submit": true, 00:15:40.486 "transport_retry_count": 4, 00:15:40.486 "bdev_retry_count": 3, 00:15:40.486 "transport_ack_timeout": 0, 00:15:40.486 "ctrlr_loss_timeout_sec": 0, 00:15:40.486 "reconnect_delay_sec": 0, 00:15:40.486 "fast_io_fail_timeout_sec": 0, 00:15:40.486 "disable_auto_failback": false, 00:15:40.486 "generate_uuids": false, 00:15:40.486 "transport_tos": 0, 00:15:40.486 "nvme_error_stat": false, 00:15:40.486 "rdma_srq_size": 0, 00:15:40.486 "io_path_stat": false, 00:15:40.486 "allow_accel_sequence": false, 00:15:40.486 "rdma_max_cq_size": 0, 00:15:40.486 "rdma_cm_event_timeout_ms": 0, 00:15:40.486 "dhchap_digests": [ 00:15:40.486 "sha256", 00:15:40.486 "sha384", 00:15:40.486 "sha512" 00:15:40.486 ], 00:15:40.486 "dhchap_dhgroups": [ 00:15:40.486 "null", 00:15:40.486 "ffdhe2048", 
00:15:40.486 "ffdhe3072", 00:15:40.486 "ffdhe4096", 00:15:40.486 "ffdhe6144", 00:15:40.486 "ffdhe8192" 00:15:40.486 ] 00:15:40.486 } 00:15:40.486 }, 00:15:40.486 { 00:15:40.486 "method": "bdev_nvme_set_hotplug", 00:15:40.486 "params": { 00:15:40.486 "period_us": 100000, 00:15:40.486 "enable": false 00:15:40.486 } 00:15:40.486 }, 00:15:40.486 { 00:15:40.486 "method": "bdev_malloc_create", 00:15:40.486 "params": { 00:15:40.486 "name": "malloc0", 00:15:40.486 "num_blocks": 8192, 00:15:40.486 "block_size": 4096, 00:15:40.486 "physical_block_size": 4096, 00:15:40.486 "uuid": "37008924-ea67-4fcb-b406-6af55373799b", 00:15:40.486 "optimal_io_boundary": 0 00:15:40.486 } 00:15:40.486 }, 00:15:40.486 { 00:15:40.486 "method": "bdev_wait_for_examine" 00:15:40.486 } 00:15:40.486 ] 00:15:40.486 }, 00:15:40.486 { 00:15:40.486 "subsystem": "nbd", 00:15:40.486 "config": [] 00:15:40.486 }, 00:15:40.486 { 00:15:40.486 "subsystem": "scheduler", 00:15:40.486 "config": [ 00:15:40.486 { 00:15:40.486 "method": "framework_set_scheduler", 00:15:40.486 "params": { 00:15:40.486 "name": "static" 00:15:40.486 } 00:15:40.486 } 00:15:40.486 ] 00:15:40.486 }, 00:15:40.486 { 00:15:40.486 "subsystem": "nvmf", 00:15:40.486 "config": [ 00:15:40.486 { 00:15:40.486 "method": "nvmf_set_config", 00:15:40.486 "params": { 00:15:40.486 "discovery_filter": "match_any", 00:15:40.486 "admin_cmd_passthru": { 00:15:40.486 "identify_ctrlr": false 00:15:40.486 } 00:15:40.486 } 00:15:40.486 }, 00:15:40.486 { 00:15:40.486 "method": "nvmf_set_max_subsystems", 00:15:40.486 "params": { 00:15:40.486 "max_subsystems": 1024 00:15:40.486 } 00:15:40.486 }, 00:15:40.486 { 00:15:40.486 "method": "nvmf_set_crdt", 00:15:40.486 "params": { 00:15:40.486 "crdt1": 0, 00:15:40.486 "crdt2": 0, 00:15:40.486 "crdt3": 0 00:15:40.486 } 00:15:40.486 }, 00:15:40.486 { 00:15:40.486 "method": "nvmf_create_transport", 00:15:40.486 "params": { 00:15:40.486 "trtype": "TCP", 00:15:40.486 "max_queue_depth": 128, 00:15:40.486 "max_io_qpairs_per_ctrlr": 127, 00:15:40.486 "in_capsule_data_size": 4096, 00:15:40.486 "max_io_size": 131072, 00:15:40.486 "io_unit_size": 131072, 00:15:40.486 "max_aq_depth": 128, 00:15:40.486 "num_shared_buffers": 511, 00:15:40.486 "buf_cache_size": 4294967295, 00:15:40.486 "dif_insert_or_strip": false, 00:15:40.486 "zcopy": false, 00:15:40.486 "c2h_success": false, 00:15:40.486 "sock_priority": 0, 00:15:40.486 "abort_timeout_sec": 1, 00:15:40.486 "ack_timeout": 0, 00:15:40.486 "data_wr_pool_size": 0 00:15:40.486 } 00:15:40.486 }, 00:15:40.486 { 00:15:40.486 "method": "nvmf_create_subsystem", 00:15:40.486 "params": { 00:15:40.486 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:40.486 "allow_any_host": false, 00:15:40.486 "serial_number": "00000000000000000000", 00:15:40.486 "model_number": "SPDK bdev Controller", 00:15:40.486 "max_namespaces": 32, 00:15:40.486 "min_cntlid": 1, 00:15:40.486 "max_cntlid": 65519, 00:15:40.486 "ana_reporting": false 00:15:40.486 } 00:15:40.486 }, 00:15:40.486 { 00:15:40.486 "method": "nvmf_subsystem_add_host", 00:15:40.486 "params": { 00:15:40.486 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:40.486 "host": "nqn.2016-06.io.spdk:host1", 00:15:40.486 "psk": "key0" 00:15:40.486 } 00:15:40.486 }, 00:15:40.486 { 00:15:40.486 "method": "nvmf_subsystem_add_ns", 00:15:40.486 "params": { 00:15:40.486 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:40.486 "namespace": { 00:15:40.486 "nsid": 1, 00:15:40.486 "bdev_name": "malloc0", 00:15:40.486 "nguid": "37008924EA674FCBB4066AF55373799B", 00:15:40.486 "uuid": 
"37008924-ea67-4fcb-b406-6af55373799b", 00:15:40.486 "no_auto_visible": false 00:15:40.486 } 00:15:40.486 } 00:15:40.486 }, 00:15:40.486 { 00:15:40.486 "method": "nvmf_subsystem_add_listener", 00:15:40.486 "params": { 00:15:40.486 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:40.486 "listen_address": { 00:15:40.486 "trtype": "TCP", 00:15:40.486 "adrfam": "IPv4", 00:15:40.486 "traddr": "10.0.0.2", 00:15:40.486 "trsvcid": "4420" 00:15:40.486 }, 00:15:40.486 "secure_channel": true 00:15:40.486 } 00:15:40.486 } 00:15:40.486 ] 00:15:40.486 } 00:15:40.486 ] 00:15:40.486 }' 00:15:40.487 01:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:40.487 01:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:40.487 01:02:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1261400 00:15:40.487 01:02:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:40.487 01:02:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1261400 00:15:40.487 01:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1261400 ']' 00:15:40.487 01:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.487 01:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:40.487 01:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.487 01:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:40.487 01:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:40.487 [2024-05-15 01:02:52.824407] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:15:40.487 [2024-05-15 01:02:52.824500] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.487 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.745 [2024-05-15 01:02:52.905089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.745 [2024-05-15 01:02:53.017701] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:40.745 [2024-05-15 01:02:53.017772] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:40.745 [2024-05-15 01:02:53.017789] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:40.745 [2024-05-15 01:02:53.017802] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:40.745 [2024-05-15 01:02:53.017814] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:40.745 [2024-05-15 01:02:53.017907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.004 [2024-05-15 01:02:53.255785] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:41.004 [2024-05-15 01:02:53.287756] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:41.004 [2024-05-15 01:02:53.287834] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:41.004 [2024-05-15 01:02:53.296119] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:41.572 01:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:41.572 01:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:41.572 01:02:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:41.572 01:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:41.572 01:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:41.572 01:02:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:41.572 01:02:53 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=1261552 00:15:41.572 01:02:53 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 1261552 /var/tmp/bdevperf.sock 00:15:41.572 01:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 1261552 ']' 00:15:41.572 01:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:41.572 01:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:41.572 01:02:53 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:41.572 01:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:41.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
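[editor's sketch] The target is now up and listening on 10.0.0.2:4420 with the configuration that was piped into nvmf_tgt via -c /dev/fd/62. Stripped of the generic bdev/sock/iobuf tuning, the TLS-relevant part of that config reduces to three entries: the PSK file registered as key0, the host entry that references it, and the listener flagged as a secure channel. A hand-trimmed subset of the dump above (not a complete, loadable config on its own):

  {
    "subsystems": [
      { "subsystem": "keyring", "config": [
        { "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/tmp.Dq0oGnZBHQ" } } ] },
      { "subsystem": "nvmf", "config": [
        { "method": "nvmf_subsystem_add_host",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "host": "nqn.2016-06.io.spdk:host1",
                      "psk": "key0" } },
        { "method": "nvmf_subsystem_add_listener",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                          "traddr": "10.0.0.2", "trsvcid": "4420" },
                      "secure_channel": true } } ] }
    ]
  }

The bdevperf config echoed below mirrors this on the initiator side with a bdev_nvme_attach_controller entry carrying the same "psk": "key0" and hostnqn nqn.2016-06.io.spdk:host1.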
00:15:41.572 01:02:53 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:15:41.572 "subsystems": [ 00:15:41.572 { 00:15:41.572 "subsystem": "keyring", 00:15:41.572 "config": [ 00:15:41.572 { 00:15:41.572 "method": "keyring_file_add_key", 00:15:41.572 "params": { 00:15:41.572 "name": "key0", 00:15:41.572 "path": "/tmp/tmp.Dq0oGnZBHQ" 00:15:41.572 } 00:15:41.572 } 00:15:41.572 ] 00:15:41.572 }, 00:15:41.572 { 00:15:41.572 "subsystem": "iobuf", 00:15:41.572 "config": [ 00:15:41.572 { 00:15:41.572 "method": "iobuf_set_options", 00:15:41.572 "params": { 00:15:41.572 "small_pool_count": 8192, 00:15:41.572 "large_pool_count": 1024, 00:15:41.572 "small_bufsize": 8192, 00:15:41.572 "large_bufsize": 135168 00:15:41.572 } 00:15:41.572 } 00:15:41.572 ] 00:15:41.572 }, 00:15:41.572 { 00:15:41.572 "subsystem": "sock", 00:15:41.572 "config": [ 00:15:41.572 { 00:15:41.572 "method": "sock_impl_set_options", 00:15:41.572 "params": { 00:15:41.572 "impl_name": "posix", 00:15:41.572 "recv_buf_size": 2097152, 00:15:41.572 "send_buf_size": 2097152, 00:15:41.572 "enable_recv_pipe": true, 00:15:41.572 "enable_quickack": false, 00:15:41.572 "enable_placement_id": 0, 00:15:41.572 "enable_zerocopy_send_server": true, 00:15:41.572 "enable_zerocopy_send_client": false, 00:15:41.572 "zerocopy_threshold": 0, 00:15:41.572 "tls_version": 0, 00:15:41.572 "enable_ktls": false 00:15:41.572 } 00:15:41.572 }, 00:15:41.572 { 00:15:41.572 "method": "sock_impl_set_options", 00:15:41.572 "params": { 00:15:41.572 "impl_name": "ssl", 00:15:41.572 "recv_buf_size": 4096, 00:15:41.572 "send_buf_size": 4096, 00:15:41.572 "enable_recv_pipe": true, 00:15:41.572 "enable_quickack": false, 00:15:41.572 "enable_placement_id": 0, 00:15:41.572 "enable_zerocopy_send_server": true, 00:15:41.572 "enable_zerocopy_send_client": false, 00:15:41.572 "zerocopy_threshold": 0, 00:15:41.572 "tls_version": 0, 00:15:41.572 "enable_ktls": false 00:15:41.572 } 00:15:41.572 } 00:15:41.572 ] 00:15:41.572 }, 00:15:41.572 { 00:15:41.572 "subsystem": "vmd", 00:15:41.572 "config": [] 00:15:41.572 }, 00:15:41.572 { 00:15:41.572 "subsystem": "accel", 00:15:41.572 "config": [ 00:15:41.572 { 00:15:41.572 "method": "accel_set_options", 00:15:41.572 "params": { 00:15:41.572 "small_cache_size": 128, 00:15:41.572 "large_cache_size": 16, 00:15:41.572 "task_count": 2048, 00:15:41.572 "sequence_count": 2048, 00:15:41.572 "buf_count": 2048 00:15:41.572 } 00:15:41.572 } 00:15:41.572 ] 00:15:41.572 }, 00:15:41.572 { 00:15:41.572 "subsystem": "bdev", 00:15:41.572 "config": [ 00:15:41.572 { 00:15:41.572 "method": "bdev_set_options", 00:15:41.572 "params": { 00:15:41.572 "bdev_io_pool_size": 65535, 00:15:41.572 "bdev_io_cache_size": 256, 00:15:41.572 "bdev_auto_examine": true, 00:15:41.572 "iobuf_small_cache_size": 128, 00:15:41.572 "iobuf_large_cache_size": 16 00:15:41.572 } 00:15:41.572 }, 00:15:41.572 { 00:15:41.572 "method": "bdev_raid_set_options", 00:15:41.572 "params": { 00:15:41.572 "process_window_size_kb": 1024 00:15:41.572 } 00:15:41.572 }, 00:15:41.572 { 00:15:41.572 "method": "bdev_iscsi_set_options", 00:15:41.572 "params": { 00:15:41.572 "timeout_sec": 30 00:15:41.572 } 00:15:41.572 }, 00:15:41.572 { 00:15:41.572 "method": "bdev_nvme_set_options", 00:15:41.572 "params": { 00:15:41.572 "action_on_timeout": "none", 00:15:41.572 "timeout_us": 0, 00:15:41.572 "timeout_admin_us": 0, 00:15:41.572 "keep_alive_timeout_ms": 10000, 00:15:41.572 "arbitration_burst": 0, 00:15:41.572 "low_priority_weight": 0, 00:15:41.572 "medium_priority_weight": 0, 00:15:41.572 
"high_priority_weight": 0, 00:15:41.572 "nvme_adminq_poll_period_us": 10000, 00:15:41.572 "nvme_ioq_poll_period_us": 0, 00:15:41.572 "io_queue_requests": 512, 00:15:41.572 "delay_cmd_submit": true, 00:15:41.572 "transport_retry_count": 4, 00:15:41.572 "bdev_retry_count": 3, 00:15:41.572 "transport_ack_timeout": 0, 00:15:41.572 "ctrlr_loss_timeout_sec": 0, 00:15:41.572 "reconnect_delay_sec": 0, 00:15:41.572 "fast_io_fail_timeout_sec": 0, 00:15:41.572 "disable_auto_failback": false, 00:15:41.572 "generate_uuids": false, 00:15:41.572 "transport_tos": 0, 00:15:41.572 "nvme_error_stat": false, 00:15:41.572 "rdma_srq_size": 0, 00:15:41.572 "io_path_stat": false, 00:15:41.572 "allow_accel_sequence": false, 00:15:41.573 "rdma_max_cq_size": 0, 00:15:41.573 "rdma_cm_event_timeout_ms": 0, 00:15:41.573 "dhchap_digests": [ 00:15:41.573 "sha256", 00:15:41.573 "sha384", 00:15:41.573 "sha512" 00:15:41.573 ], 00:15:41.573 "dhchap_dhgroups": [ 00:15:41.573 "null", 00:15:41.573 "ffdhe2048", 00:15:41.573 "ffdhe3072", 00:15:41.573 "ffdhe4096", 00:15:41.573 "ffdhe6144", 00:15:41.573 "ffdhe8192" 00:15:41.573 ] 00:15:41.573 } 00:15:41.573 }, 00:15:41.573 { 00:15:41.573 "method": "bdev_nvme_attach_controller", 00:15:41.573 "params": { 00:15:41.573 "name": "nvme0", 00:15:41.573 "trtype": "TCP", 00:15:41.573 "adrfam": "IPv4", 00:15:41.573 "traddr": "10.0.0.2", 00:15:41.573 "trsvcid": "4420", 00:15:41.573 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:41.573 "prchk_reftag": false, 00:15:41.573 "prchk_guard": false, 00:15:41.573 "ctrlr_loss_timeout_sec": 0, 00:15:41.573 "reconnect_delay_sec": 0, 00:15:41.573 "fast_io_fail_timeout_sec": 0, 00:15:41.573 "psk": "key0", 00:15:41.573 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:41.573 "hdgst": false, 00:15:41.573 "ddgst": false 00:15:41.573 } 00:15:41.573 }, 00:15:41.573 { 00:15:41.573 "method": "bdev_nvme_set_hotplug", 00:15:41.573 "params": { 00:15:41.573 "period_us": 100000, 00:15:41.573 "enable": false 00:15:41.573 } 00:15:41.573 }, 00:15:41.573 { 00:15:41.573 "method": "bdev_enable_histogram", 00:15:41.573 "params": { 00:15:41.573 "name": "nvme0n1", 00:15:41.573 "enable": true 00:15:41.573 } 00:15:41.573 }, 00:15:41.573 { 00:15:41.573 "method": "bdev_wait_for_examine" 00:15:41.573 } 00:15:41.573 ] 00:15:41.573 }, 00:15:41.573 { 00:15:41.573 "subsystem": "nbd", 00:15:41.573 "config": [] 00:15:41.573 } 00:15:41.573 ] 00:15:41.573 }' 00:15:41.573 01:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:41.573 01:02:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:41.573 [2024-05-15 01:02:53.819190] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:15:41.573 [2024-05-15 01:02:53.819295] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1261552 ] 00:15:41.573 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.573 [2024-05-15 01:02:53.891185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.831 [2024-05-15 01:02:54.007539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.831 [2024-05-15 01:02:54.185732] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:42.398 01:02:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:42.398 01:02:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:42.398 01:02:54 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:42.398 01:02:54 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:15:42.656 01:02:55 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.656 01:02:55 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:42.913 Running I/O for 1 seconds... 00:15:43.847 00:15:43.847 Latency(us) 00:15:43.847 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:43.847 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:43.847 Verification LBA range: start 0x0 length 0x2000 00:15:43.847 nvme0n1 : 1.07 1373.66 5.37 0.00 0.00 90667.76 6553.60 132042.90 00:15:43.847 =================================================================================================================== 00:15:43.847 Total : 1373.66 5.37 0.00 0.00 90667.76 6553.60 132042.90 00:15:43.847 0 00:15:43.847 01:02:56 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:15:43.847 01:02:56 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:15:43.847 01:02:56 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:15:43.847 01:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:15:43.847 01:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:15:43.847 01:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:15:43.847 01:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:43.847 01:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:15:43.847 01:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:15:43.847 01:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:15:43.847 01:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:43.847 nvmf_trace.0 00:15:44.107 01:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:15:44.107 01:02:56 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1261552 00:15:44.107 01:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1261552 ']' 00:15:44.107 01:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1261552 
00:15:44.107 01:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:44.107 01:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:44.107 01:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1261552 00:15:44.107 01:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:44.107 01:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:44.107 01:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1261552' 00:15:44.107 killing process with pid 1261552 00:15:44.107 01:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1261552 00:15:44.107 Received shutdown signal, test time was about 1.000000 seconds 00:15:44.107 00:15:44.107 Latency(us) 00:15:44.107 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:44.107 =================================================================================================================== 00:15:44.107 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:44.107 01:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 1261552 00:15:44.366 01:02:56 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:15:44.366 01:02:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:44.366 01:02:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:15:44.366 01:02:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:44.366 01:02:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:15:44.366 01:02:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:44.366 01:02:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:44.366 rmmod nvme_tcp 00:15:44.366 rmmod nvme_fabrics 00:15:44.366 rmmod nvme_keyring 00:15:44.366 01:02:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:44.366 01:02:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:15:44.366 01:02:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:15:44.366 01:02:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1261400 ']' 00:15:44.366 01:02:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1261400 00:15:44.366 01:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 1261400 ']' 00:15:44.366 01:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 1261400 00:15:44.366 01:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:44.366 01:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:44.366 01:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1261400 00:15:44.366 01:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:44.366 01:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:44.366 01:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1261400' 00:15:44.366 killing process with pid 1261400 00:15:44.366 01:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 1261400 00:15:44.366 [2024-05-15 01:02:56.644708] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:44.366 01:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- 
# wait 1261400 00:15:44.627 01:02:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:44.627 01:02:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:44.627 01:02:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:44.627 01:02:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:44.627 01:02:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:44.627 01:02:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.627 01:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:44.627 01:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.165 01:02:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:47.165 01:02:58 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Bu4qD7vXNp /tmp/tmp.PdrcTkQHfC /tmp/tmp.Dq0oGnZBHQ 00:15:47.165 00:15:47.165 real 1m24.737s 00:15:47.165 user 2m14.204s 00:15:47.165 sys 0m28.405s 00:15:47.165 01:02:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:47.165 01:02:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:47.165 ************************************ 00:15:47.165 END TEST nvmf_tls 00:15:47.165 ************************************ 00:15:47.165 01:02:58 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:47.165 01:02:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:47.165 01:02:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:47.165 01:02:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:47.165 ************************************ 00:15:47.165 START TEST nvmf_fips 00:15:47.165 ************************************ 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:47.165 * Looking for test storage... 
00:15:47.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.165 01:02:59 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:15:47.165 01:02:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:15:47.166 Error setting digest 00:15:47.166 00D229DB6C7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:15:47.166 00D229DB6C7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- 
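[editor's sketch] At this point fips.sh has finished its environment checks: OpenSSL reports a version >= 3.0.0, the FIPS provider module exists at /usr/lib64/ossl-modules/fips.so, a generated spdk_fips.conf is exported through OPENSSL_CONF, 'openssl list -providers' shows both the base and the FIPS providers, and an MD5 digest is refused ("Error setting digest" above), which is the expected behaviour with FIPS enforced. A manual re-run of the same checks, using the paths visible in this log (they are distro-specific), would look roughly like:

  openssl version                                  # must report >= 3.0.0
  ls /usr/lib64/ossl-modules/fips.so               # FIPS provider module present
  OPENSSL_CONF=spdk_fips.conf openssl list -providers | grep name
  # md5 must fail while the FIPS provider is active
  OPENSSL_CONF=spdk_fips.conf openssl md5 /dev/null \
    && echo 'WARNING: md5 succeeded, FIPS not enforced' \
    || echo 'md5 rejected as expected'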
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:15:47.166 01:02:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:49.706 
01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:49.706 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:49.706 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:49.706 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:49.706 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:49.706 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:49.707 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:49.707 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:49.707 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:49.707 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:49.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:49.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:15:49.707 00:15:49.707 --- 10.0.0.2 ping statistics --- 00:15:49.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.707 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:15:49.707 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:49.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:49.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:15:49.707 00:15:49.707 --- 10.0.0.1 ping statistics --- 00:15:49.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.707 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:15:49.707 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:49.707 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:15:49.707 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:49.707 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:49.707 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:49.707 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:49.707 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:49.707 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:49.707 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:49.707 01:03:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:15:49.707 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:49.707 01:03:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:49.707 01:03:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:49.707 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1264290 00:15:49.707 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:49.707 01:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1264290 00:15:49.707 01:03:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 1264290 ']' 00:15:49.707 01:03:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.707 01:03:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:49.707 01:03:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.707 01:03:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:49.707 01:03:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:49.707 [2024-05-15 01:03:01.749253] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:15:49.707 [2024-05-15 01:03:01.749347] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.707 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.707 [2024-05-15 01:03:01.824490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.707 [2024-05-15 01:03:01.937235] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:49.707 [2024-05-15 01:03:01.937309] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:49.707 [2024-05-15 01:03:01.937325] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:49.707 [2024-05-15 01:03:01.937339] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:49.707 [2024-05-15 01:03:01.937351] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:49.707 [2024-05-15 01:03:01.937380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:50.308 01:03:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:50.308 01:03:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:15:50.308 01:03:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:50.308 01:03:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:50.308 01:03:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:50.308 01:03:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:50.308 01:03:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:15:50.308 01:03:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:50.308 01:03:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:50.308 01:03:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:50.308 01:03:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:50.308 01:03:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:50.308 01:03:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:50.308 01:03:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:50.566 [2024-05-15 01:03:02.946143] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:50.825 [2024-05-15 01:03:02.962102] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:50.825 [2024-05-15 01:03:02.962175] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:50.825 [2024-05-15 01:03:02.962385] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:50.825 [2024-05-15 01:03:02.993887] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:50.825 malloc0 00:15:50.825 01:03:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:50.825 01:03:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1264473 00:15:50.825 01:03:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1264473 /var/tmp/bdevperf.sock 00:15:50.825 01:03:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 1264473 ']' 00:15:50.825 01:03:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:50.825 01:03:03 nvmf_tcp.nvmf_fips -- 
fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:50.825 01:03:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:50.825 01:03:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:50.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:50.825 01:03:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:50.825 01:03:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:50.825 [2024-05-15 01:03:03.086197] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:15:50.825 [2024-05-15 01:03:03.086300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1264473 ] 00:15:50.825 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.825 [2024-05-15 01:03:03.155961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.083 [2024-05-15 01:03:03.265523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:51.648 01:03:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:51.648 01:03:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:15:51.648 01:03:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:51.906 [2024-05-15 01:03:04.276818] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:51.906 [2024-05-15 01:03:04.276978] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:52.164 TLSTESTn1 00:15:52.164 01:03:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:52.164 Running I/O for 10 seconds... 
00:16:02.196 00:16:02.196 Latency(us) 00:16:02.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.196 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:02.196 Verification LBA range: start 0x0 length 0x2000 00:16:02.196 TLSTESTn1 : 10.09 1319.12 5.15 0.00 0.00 96683.87 6116.69 126605.84 00:16:02.196 =================================================================================================================== 00:16:02.196 Total : 1319.12 5.15 0.00 0.00 96683.87 6116.69 126605.84 00:16:02.196 0 00:16:02.454 01:03:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:16:02.454 01:03:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:16:02.454 01:03:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:16:02.454 01:03:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:16:02.454 01:03:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:16:02.454 01:03:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:02.454 01:03:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:16:02.454 01:03:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:16:02.454 01:03:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:16:02.454 01:03:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:02.454 nvmf_trace.0 00:16:02.454 01:03:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:16:02.454 01:03:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1264473 00:16:02.454 01:03:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 1264473 ']' 00:16:02.454 01:03:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 1264473 00:16:02.454 01:03:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:16:02.454 01:03:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:02.454 01:03:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1264473 00:16:02.454 01:03:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:16:02.454 01:03:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:16:02.454 01:03:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1264473' 00:16:02.454 killing process with pid 1264473 00:16:02.454 01:03:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 1264473 00:16:02.454 Received shutdown signal, test time was about 10.000000 seconds 00:16:02.454 00:16:02.454 Latency(us) 00:16:02.454 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.454 =================================================================================================================== 00:16:02.454 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:02.454 [2024-05-15 01:03:14.697363] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:02.454 01:03:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 1264473 00:16:02.712 01:03:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:16:02.712 01:03:14 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:16:02.712 01:03:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:16:02.712 01:03:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:02.712 01:03:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:16:02.712 01:03:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:02.712 01:03:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:02.712 rmmod nvme_tcp 00:16:02.712 rmmod nvme_fabrics 00:16:02.712 rmmod nvme_keyring 00:16:02.712 01:03:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:02.712 01:03:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:16:02.712 01:03:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:16:02.712 01:03:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1264290 ']' 00:16:02.712 01:03:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1264290 00:16:02.712 01:03:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 1264290 ']' 00:16:02.712 01:03:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 1264290 00:16:02.712 01:03:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:16:02.712 01:03:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:02.712 01:03:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1264290 00:16:02.712 01:03:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:02.712 01:03:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:02.712 01:03:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1264290' 00:16:02.712 killing process with pid 1264290 00:16:02.712 01:03:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 1264290 00:16:02.712 [2024-05-15 01:03:15.046105] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:02.712 [2024-05-15 01:03:15.046152] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:02.712 01:03:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 1264290 00:16:02.971 01:03:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:02.971 01:03:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:02.971 01:03:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:02.971 01:03:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:02.971 01:03:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:02.971 01:03:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.971 01:03:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:02.971 01:03:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.507 01:03:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:05.507 01:03:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:16:05.507 00:16:05.507 real 0m18.359s 00:16:05.507 user 0m23.291s 00:16:05.507 sys 0m6.667s 00:16:05.507 01:03:17 
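For reference, the nvmf_fips run above exercises NVMe/TCP with a TLS pre-shared key: fips.sh writes the interchange-format PSK to key.txt, locks its permissions to 0600, configures the target through an RPC batch (the '[listen_]address.transport' and 'PSK path' deprecation warnings above come from that batch), and then attaches an initiator controller through bdevperf's RPC socket using the same key. A minimal sketch of that flow follows; the paths, NQNs and the 10.0.0.2:4420 listener are taken from this trace, while the target-side RPC commands are an approximate reconstruction (the batch is fed to rpc.py on stdin and not echoed here), so the exact flags may differ from what fips.sh actually sends.

    # TLS PSK in NVMe interchange format, as created by fips.sh above.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    KEY=$SPDK/test/nvmf/fips/key.txt
    echo -n "NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:" > "$KEY"
    chmod 0600 "$KEY"

    # Target side (reconstructed, not verbatim): TCP transport, malloc-backed
    # namespace, TLS-capable listener on 10.0.0.2:4420, and the host's PSK.
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o
    $SPDK/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 --secure-channel   # --secure-channel assumed
    $SPDK/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk "$KEY"

    # Initiator side (verbatim from the trace): bdevperf waits on its own RPC
    # socket, the controller is attached with the same PSK, then I/O is driven.
    $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests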
nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:05.507 01:03:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:05.507 ************************************ 00:16:05.507 END TEST nvmf_fips 00:16:05.507 ************************************ 00:16:05.507 01:03:17 nvmf_tcp -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:16:05.507 01:03:17 nvmf_tcp -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:16:05.507 01:03:17 nvmf_tcp -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:16:05.507 01:03:17 nvmf_tcp -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:16:05.507 01:03:17 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:16:05.507 01:03:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:08.044 01:03:19 
nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:08.044 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:08.044 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:08.044 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:08.044 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:16:08.044 01:03:19 nvmf_tcp -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 
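The perf_adq test launched here repeats the same per-run network bring-up that nvmf_tcp_init performed for the FIPS test above: one E810 port is moved into a private namespace and addressed as the NVMe/TCP target, while the other stays in the default namespace as the initiator. A condensed, annotated sketch of that sequence follows, using the interface names (cvl_0_0/cvl_0_1) and 10.0.0.0/24 addresses this particular run happened to pick; the same commands appear again verbatim further down once perf_adq.sh reaches nvmftestinit.

    # Condensed replay of nvmf_tcp_init as traced above; names and addresses
    # are the ones this run used, not fixed values.
    TARGET_IF=cvl_0_0        # becomes the NVMe/TCP target port
    INITIATOR_IF=cvl_0_1     # stays in the default namespace
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"

    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"             # isolate the target port

    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"      # initiator address
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"   # target address

    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up

    # Allow NVMe/TCP traffic in on the initiator port and verify both directions.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1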
00:16:08.044 01:03:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:08.044 01:03:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:08.044 01:03:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:08.044 ************************************ 00:16:08.044 START TEST nvmf_perf_adq 00:16:08.044 ************************************ 00:16:08.044 01:03:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:16:08.044 * Looking for test storage... 00:16:08.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:08.044 01:03:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:08.044 01:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:16:08.044 01:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:08.044 01:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:08.044 01:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:08.044 01:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:08.044 01:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:08.044 01:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:08.044 01:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:08.044 01:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:08.044 01:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:08.044 01:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.044 01:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:08.044 01:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:08.044 01:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:08.044 01:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.044 01:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:08.044 01:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:08.044 01:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:08.044 01:03:20 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:08.044 01:03:20 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.044 01:03:20 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.044 01:03:20 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.045 01:03:20 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.045 01:03:20 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.045 01:03:20 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:16:08.045 01:03:20 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.045 01:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:16:08.045 01:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:08.045 01:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:08.045 01:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:08.045 01:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.045 01:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.045 01:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:08.045 01:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:08.045 01:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:08.045 01:03:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:16:08.045 01:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:16:08.045 01:03:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:10.578 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:10.578 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:10.578 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:10.578 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 
-- # (( 2 == 0 )) 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:16:10.578 01:03:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:16:10.837 01:03:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:16:12.736 01:03:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:18.056 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:18.056 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:18.056 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:18.056 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:18.057 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:18.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:18.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:16:18.057 00:16:18.057 --- 10.0.0.2 ping statistics --- 00:16:18.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.057 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:18.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:18.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:16:18.057 00:16:18.057 --- 10.0.0.1 ping statistics --- 00:16:18.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.057 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1271437 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1271437 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 1271437 ']' 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:18.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:18.057 01:03:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:18.057 [2024-05-15 01:03:29.870886] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:16:18.057 [2024-05-15 01:03:29.870981] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:18.057 EAL: No free 2048 kB hugepages reported on node 1 00:16:18.057 [2024-05-15 01:03:29.950224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:18.057 [2024-05-15 01:03:30.071991] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:18.057 [2024-05-15 01:03:30.072045] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:18.057 [2024-05-15 01:03:30.072074] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:18.057 [2024-05-15 01:03:30.072086] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:18.057 [2024-05-15 01:03:30.072095] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:18.057 [2024-05-15 01:03:30.072156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.057 [2024-05-15 01:03:30.075962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:18.057 [2024-05-15 01:03:30.075988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:18.057 [2024-05-15 01:03:30.075991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:18.057 [2024-05-15 01:03:30.289001] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:18.057 Malloc1 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:18.057 [2024-05-15 01:03:30.342168] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:18.057 [2024-05-15 01:03:30.342500] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.057 01:03:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1271589 00:16:18.058 01:03:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:16:18.058 01:03:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:18.058 EAL: No free 2048 kB hugepages reported on node 1 00:16:20.587 01:03:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:16:20.587 01:03:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.587 01:03:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:20.587 01:03:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.587 01:03:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:16:20.588 "tick_rate": 2700000000, 00:16:20.588 "poll_groups": [ 00:16:20.588 { 00:16:20.588 "name": "nvmf_tgt_poll_group_000", 00:16:20.588 "admin_qpairs": 1, 00:16:20.588 "io_qpairs": 1, 00:16:20.588 "current_admin_qpairs": 1, 00:16:20.588 "current_io_qpairs": 1, 00:16:20.588 "pending_bdev_io": 0, 00:16:20.588 "completed_nvme_io": 15040, 00:16:20.588 "transports": [ 00:16:20.588 { 00:16:20.588 "trtype": "TCP" 00:16:20.588 } 00:16:20.588 ] 00:16:20.588 }, 00:16:20.588 { 00:16:20.588 "name": "nvmf_tgt_poll_group_001", 00:16:20.588 "admin_qpairs": 0, 00:16:20.588 "io_qpairs": 1, 00:16:20.588 "current_admin_qpairs": 0, 00:16:20.588 "current_io_qpairs": 1, 00:16:20.588 "pending_bdev_io": 0, 00:16:20.588 "completed_nvme_io": 20164, 00:16:20.588 "transports": [ 00:16:20.588 { 00:16:20.588 "trtype": "TCP" 00:16:20.588 } 00:16:20.588 ] 00:16:20.588 }, 00:16:20.588 { 00:16:20.588 "name": "nvmf_tgt_poll_group_002", 00:16:20.588 "admin_qpairs": 0, 00:16:20.588 "io_qpairs": 1, 00:16:20.588 "current_admin_qpairs": 0, 00:16:20.588 "current_io_qpairs": 1, 00:16:20.588 "pending_bdev_io": 0, 00:16:20.588 "completed_nvme_io": 20536, 00:16:20.588 "transports": [ 00:16:20.588 { 00:16:20.588 "trtype": "TCP" 00:16:20.588 } 00:16:20.588 ] 00:16:20.588 }, 00:16:20.588 { 00:16:20.588 "name": "nvmf_tgt_poll_group_003", 00:16:20.588 "admin_qpairs": 0, 00:16:20.588 "io_qpairs": 1, 00:16:20.588 "current_admin_qpairs": 0, 00:16:20.588 "current_io_qpairs": 1, 00:16:20.588 "pending_bdev_io": 0, 00:16:20.588 "completed_nvme_io": 20753, 00:16:20.588 "transports": [ 00:16:20.588 { 00:16:20.588 "trtype": "TCP" 00:16:20.588 } 00:16:20.588 ] 00:16:20.588 } 00:16:20.588 ] 00:16:20.588 }' 00:16:20.588 01:03:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:16:20.588 01:03:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:16:20.588 01:03:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:16:20.588 01:03:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:16:20.588 01:03:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1271589 00:16:28.703 Initializing NVMe Controllers 00:16:28.703 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:28.703 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:16:28.703 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:16:28.703 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:16:28.703 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:16:28.703 Initialization complete. Launching workers. 
00:16:28.703 ======================================================== 00:16:28.703 Latency(us) 00:16:28.703 Device Information : IOPS MiB/s Average min max 00:16:28.703 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10916.08 42.64 5863.71 2026.14 8664.97 00:16:28.703 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10529.19 41.13 6078.53 1159.51 10425.72 00:16:28.703 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10869.68 42.46 5907.57 1979.48 46216.03 00:16:28.703 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7917.52 30.93 8084.67 1417.53 13415.51 00:16:28.703 ======================================================== 00:16:28.703 Total : 40232.47 157.16 6368.85 1159.51 46216.03 00:16:28.703 00:16:28.703 01:03:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:16:28.703 01:03:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:28.703 01:03:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:16:28.703 01:03:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:28.703 01:03:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:16:28.703 01:03:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:28.703 01:03:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:28.703 rmmod nvme_tcp 00:16:28.703 rmmod nvme_fabrics 00:16:28.703 rmmod nvme_keyring 00:16:28.703 01:03:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:28.703 01:03:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:16:28.703 01:03:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:16:28.703 01:03:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1271437 ']' 00:16:28.703 01:03:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1271437 00:16:28.703 01:03:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 1271437 ']' 00:16:28.703 01:03:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 1271437 00:16:28.703 01:03:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:16:28.703 01:03:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:28.703 01:03:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1271437 00:16:28.703 01:03:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:28.703 01:03:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:28.703 01:03:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1271437' 00:16:28.703 killing process with pid 1271437 00:16:28.703 01:03:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 1271437 00:16:28.703 [2024-05-15 01:03:40.614359] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:28.703 01:03:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 1271437 00:16:28.703 01:03:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:28.703 01:03:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:28.703 01:03:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:28.703 01:03:40 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:28.703 01:03:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:28.703 01:03:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.703 01:03:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:28.703 01:03:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.602 01:03:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:30.602 01:03:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:16:30.602 01:03:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:16:31.554 01:03:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:16:32.928 01:03:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:16:38.198 
01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:38.198 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:38.198 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:38.198 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:38.198 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:38.198 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush 
cvl_0_1 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:38.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:38.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:16:38.199 00:16:38.199 --- 10.0.0.2 ping statistics --- 00:16:38.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.199 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:38.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:38.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:16:38.199 00:16:38.199 --- 10.0.0.1 ping statistics --- 00:16:38.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.199 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:16:38.199 net.core.busy_poll = 1 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:16:38.199 net.core.busy_read = 1 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec 
cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1274082 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1274082 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 1274082 ']' 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:38.199 01:03:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:38.199 [2024-05-15 01:03:50.392944] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:16:38.199 [2024-05-15 01:03:50.393028] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:38.199 EAL: No free 2048 kB hugepages reported on node 1 00:16:38.199 [2024-05-15 01:03:50.473712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:38.199 [2024-05-15 01:03:50.588165] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:38.199 [2024-05-15 01:03:50.588238] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:38.199 [2024-05-15 01:03:50.588253] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:38.199 [2024-05-15 01:03:50.588265] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:38.199 [2024-05-15 01:03:50.588290] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
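For reference, the ADQ host-side setup that perf_adq.sh replays just above (hw-tc-offload, busy polling, an mqprio qdisc with two traffic classes, and a flower filter steering NVMe/TCP port 4420 into TC1, followed by the set_xps_rxqs helper that aligns transmit queues with their receive queues via XPS) condenses to the sketch below. The interface and namespace names (cvl_0_0, cvl_0_0_ns_spdk) and the 10.0.0.2:4420 listener are taken from this particular run and would differ on other systems.

    # assumes an ice (E810) port cvl_0_0 already moved into the cvl_0_0_ns_spdk namespace
    ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
    ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # two traffic classes: TC0 -> queues 0-1, TC1 -> queues 2-3, offloaded to hardware in channel mode
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
    # steer NVMe/TCP traffic (dst 10.0.0.2:4420) into TC1 in hardware, bypassing the software path
    ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1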
00:16:38.199 [2024-05-15 01:03:50.588344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:38.199 [2024-05-15 01:03:50.588395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:38.199 [2024-05-15 01:03:50.588418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:38.199 [2024-05-15 01:03:50.588420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.133 01:03:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:39.133 01:03:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:16:39.133 01:03:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:39.133 01:03:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:39.133 01:03:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:39.133 01:03:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:39.133 01:03:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:16:39.133 01:03:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:16:39.133 01:03:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.133 01:03:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:16:39.133 01:03:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:39.133 01:03:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.133 01:03:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:16:39.133 01:03:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:16:39.133 01:03:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.133 01:03:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:39.133 01:03:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.133 01:03:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:16:39.133 01:03:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.133 01:03:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:39.133 01:03:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.133 01:03:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:16:39.133 01:03:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.133 01:03:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:39.134 [2024-05-15 01:03:51.515502] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:39.134 01:03:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.134 01:03:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:39.134 01:03:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.134 01:03:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:39.392 Malloc1 00:16:39.392 01:03:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.392 01:03:51 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:39.392 01:03:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.392 01:03:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:39.392 01:03:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.392 01:03:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:39.392 01:03:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.392 01:03:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:39.392 01:03:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.392 01:03:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:39.392 01:03:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.392 01:03:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:39.392 [2024-05-15 01:03:51.567295] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:39.392 [2024-05-15 01:03:51.567586] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:39.392 01:03:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.392 01:03:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1274282 00:16:39.392 01:03:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:16:39.392 01:03:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:39.392 EAL: No free 2048 kB hugepages reported on node 1 00:16:41.322 01:03:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:16:41.322 01:03:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.322 01:03:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:41.322 01:03:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.322 01:03:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:16:41.322 "tick_rate": 2700000000, 00:16:41.322 "poll_groups": [ 00:16:41.322 { 00:16:41.322 "name": "nvmf_tgt_poll_group_000", 00:16:41.322 "admin_qpairs": 1, 00:16:41.322 "io_qpairs": 2, 00:16:41.322 "current_admin_qpairs": 1, 00:16:41.322 "current_io_qpairs": 2, 00:16:41.322 "pending_bdev_io": 0, 00:16:41.322 "completed_nvme_io": 20483, 00:16:41.322 "transports": [ 00:16:41.322 { 00:16:41.322 "trtype": "TCP" 00:16:41.322 } 00:16:41.322 ] 00:16:41.322 }, 00:16:41.322 { 00:16:41.322 "name": "nvmf_tgt_poll_group_001", 00:16:41.322 "admin_qpairs": 0, 00:16:41.322 "io_qpairs": 2, 00:16:41.322 "current_admin_qpairs": 0, 00:16:41.322 "current_io_qpairs": 2, 00:16:41.322 "pending_bdev_io": 0, 00:16:41.322 "completed_nvme_io": 27115, 00:16:41.322 "transports": [ 00:16:41.322 { 00:16:41.322 "trtype": "TCP" 00:16:41.322 } 00:16:41.322 ] 00:16:41.322 }, 00:16:41.322 { 00:16:41.322 "name": 
"nvmf_tgt_poll_group_002", 00:16:41.322 "admin_qpairs": 0, 00:16:41.322 "io_qpairs": 0, 00:16:41.322 "current_admin_qpairs": 0, 00:16:41.322 "current_io_qpairs": 0, 00:16:41.322 "pending_bdev_io": 0, 00:16:41.322 "completed_nvme_io": 0, 00:16:41.322 "transports": [ 00:16:41.322 { 00:16:41.322 "trtype": "TCP" 00:16:41.322 } 00:16:41.322 ] 00:16:41.322 }, 00:16:41.322 { 00:16:41.322 "name": "nvmf_tgt_poll_group_003", 00:16:41.322 "admin_qpairs": 0, 00:16:41.322 "io_qpairs": 0, 00:16:41.322 "current_admin_qpairs": 0, 00:16:41.322 "current_io_qpairs": 0, 00:16:41.322 "pending_bdev_io": 0, 00:16:41.322 "completed_nvme_io": 0, 00:16:41.322 "transports": [ 00:16:41.322 { 00:16:41.322 "trtype": "TCP" 00:16:41.322 } 00:16:41.322 ] 00:16:41.322 } 00:16:41.322 ] 00:16:41.322 }' 00:16:41.322 01:03:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:16:41.322 01:03:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:16:41.322 01:03:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:16:41.322 01:03:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:16:41.322 01:03:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1274282 00:16:49.429 Initializing NVMe Controllers 00:16:49.429 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:49.429 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:16:49.429 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:16:49.429 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:16:49.429 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:16:49.429 Initialization complete. Launching workers. 
00:16:49.429 ======================================================== 00:16:49.429 Latency(us) 00:16:49.429 Device Information : IOPS MiB/s Average min max 00:16:49.429 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5368.70 20.97 11936.71 1910.69 58003.75 00:16:49.429 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6628.90 25.89 9655.32 2062.31 56100.32 00:16:49.429 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5498.20 21.48 11640.55 2307.12 58343.13 00:16:49.429 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7688.00 30.03 8326.34 1774.58 53904.63 00:16:49.429 ======================================================== 00:16:49.429 Total : 25183.80 98.37 10169.38 1774.58 58343.13 00:16:49.429 00:16:49.429 01:04:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:16:49.429 01:04:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:49.429 01:04:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:16:49.429 01:04:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:49.429 01:04:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:16:49.429 01:04:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:49.429 01:04:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:49.429 rmmod nvme_tcp 00:16:49.429 rmmod nvme_fabrics 00:16:49.429 rmmod nvme_keyring 00:16:49.429 01:04:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:49.429 01:04:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:16:49.429 01:04:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:16:49.429 01:04:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1274082 ']' 00:16:49.429 01:04:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1274082 00:16:49.429 01:04:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 1274082 ']' 00:16:49.429 01:04:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 1274082 00:16:49.429 01:04:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:16:49.429 01:04:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:49.429 01:04:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1274082 00:16:49.429 01:04:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:49.429 01:04:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:49.429 01:04:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1274082' 00:16:49.429 killing process with pid 1274082 00:16:49.429 01:04:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 1274082 00:16:49.429 [2024-05-15 01:04:01.817699] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:49.429 01:04:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 1274082 00:16:49.994 01:04:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:49.994 01:04:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:49.994 01:04:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:49.994 01:04:02 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:49.994 01:04:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:49.994 01:04:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.994 01:04:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:49.994 01:04:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.280 01:04:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:53.280 01:04:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:53.280 00:16:53.280 real 0m45.228s 00:16:53.280 user 2m29.884s 00:16:53.280 sys 0m14.951s 00:16:53.280 01:04:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:53.280 01:04:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:53.280 ************************************ 00:16:53.280 END TEST nvmf_perf_adq 00:16:53.280 ************************************ 00:16:53.280 01:04:05 nvmf_tcp -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:16:53.280 01:04:05 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:53.280 01:04:05 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:53.280 01:04:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:53.280 ************************************ 00:16:53.280 START TEST nvmf_shutdown 00:16:53.280 ************************************ 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:16:53.280 * Looking for test storage... 
00:16:53.280 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:16:53.280 ************************************ 00:16:53.280 START TEST nvmf_shutdown_tc1 00:16:53.280 ************************************ 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:16:53.280 01:04:05 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:53.280 01:04:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:53.281 01:04:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:53.281 01:04:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:55.812 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:55.812 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:55.812 01:04:07 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:55.812 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:55.813 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:55.813 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:55.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:55.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:16:55.813 00:16:55.813 --- 10.0.0.2 ping statistics --- 00:16:55.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.813 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:55.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:55.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:16:55.813 00:16:55.813 --- 10.0.0.1 ping statistics --- 00:16:55.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.813 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1277951 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1277951 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 1277951 ']' 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:55.813 01:04:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:55.813 [2024-05-15 01:04:07.940319] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
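The nvmf_tcp_init sequence traced above splits the two e810 ports between the host and a dedicated network namespace, so the target side (10.0.0.2) and the initiator side (10.0.0.1) exchange NVMe/TCP traffic over real NICs on a single machine. A condensed sketch of those steps, reconstructed only from the commands visible in this trace (the interface names cvl_0_0/cvl_0_1 and the namespace name are specific to this rig):

    ip netns add cvl_0_0_ns_spdk                                         # namespace that will own the target-side port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # move the first port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address stays on the host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP replies reach the host port
    ping -c 1 10.0.0.2                                                   # host -> namespace reachability check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace -> host reachability check

The ping round trips shown above are the gate for returning 0 from nvmf_tcp_init; nvmf_tgt is then launched inside the namespace via the NVMF_TARGET_NS_CMD prefix ("ip netns exec cvl_0_0_ns_spdk").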
00:16:55.813 [2024-05-15 01:04:07.940384] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:55.813 EAL: No free 2048 kB hugepages reported on node 1 00:16:55.813 [2024-05-15 01:04:08.017287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:55.813 [2024-05-15 01:04:08.134559] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:55.813 [2024-05-15 01:04:08.134622] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:55.813 [2024-05-15 01:04:08.134639] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:55.813 [2024-05-15 01:04:08.134653] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:55.813 [2024-05-15 01:04:08.134666] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:55.813 [2024-05-15 01:04:08.134750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:55.813 [2024-05-15 01:04:08.134866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:55.813 [2024-05-15 01:04:08.134946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:55.813 [2024-05-15 01:04:08.134965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:56.072 [2024-05-15 01:04:08.299801] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.072 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:56.072 Malloc1 00:16:56.072 [2024-05-15 01:04:08.389158] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:56.072 [2024-05-15 01:04:08.389484] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:56.072 Malloc2 00:16:56.331 Malloc3 00:16:56.331 Malloc4 00:16:56.331 Malloc5 00:16:56.331 Malloc6 00:16:56.331 Malloc7 00:16:56.331 Malloc8 00:16:56.590 Malloc9 00:16:56.590 Malloc10 00:16:56.590 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.590 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:16:56.590 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:56.590 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:56.590 01:04:08 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1278127 00:16:56.590 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1278127 /var/tmp/bdevperf.sock 00:16:56.590 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 1278127 ']' 00:16:56.590 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:56.590 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:16:56.590 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:56.590 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:56.590 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:56.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:56.590 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:16:56.590 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:56.590 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:16:56.590 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:56.590 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:56.590 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:56.590 { 00:16:56.590 "params": { 00:16:56.590 "name": "Nvme$subsystem", 00:16:56.590 "trtype": "$TEST_TRANSPORT", 00:16:56.590 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:56.590 "adrfam": "ipv4", 00:16:56.590 "trsvcid": "$NVMF_PORT", 00:16:56.590 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:56.590 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:56.590 "hdgst": ${hdgst:-false}, 00:16:56.590 "ddgst": ${ddgst:-false} 00:16:56.590 }, 00:16:56.590 "method": "bdev_nvme_attach_controller" 00:16:56.590 } 00:16:56.590 EOF 00:16:56.590 )") 00:16:56.590 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:56.590 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:56.590 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:56.590 { 00:16:56.590 "params": { 00:16:56.590 "name": "Nvme$subsystem", 00:16:56.590 "trtype": "$TEST_TRANSPORT", 00:16:56.590 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:56.590 "adrfam": "ipv4", 00:16:56.590 "trsvcid": "$NVMF_PORT", 00:16:56.590 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:56.590 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:56.590 "hdgst": ${hdgst:-false}, 00:16:56.591 "ddgst": ${ddgst:-false} 00:16:56.591 }, 00:16:56.591 "method": "bdev_nvme_attach_controller" 00:16:56.591 } 00:16:56.591 EOF 00:16:56.591 )") 00:16:56.591 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:56.591 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 
-- # for subsystem in "${@:-1}" 00:16:56.591 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:56.591 { 00:16:56.591 "params": { 00:16:56.591 "name": "Nvme$subsystem", 00:16:56.591 "trtype": "$TEST_TRANSPORT", 00:16:56.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:56.591 "adrfam": "ipv4", 00:16:56.591 "trsvcid": "$NVMF_PORT", 00:16:56.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:56.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:56.591 "hdgst": ${hdgst:-false}, 00:16:56.591 "ddgst": ${ddgst:-false} 00:16:56.591 }, 00:16:56.591 "method": "bdev_nvme_attach_controller" 00:16:56.591 } 00:16:56.591 EOF 00:16:56.591 )") 00:16:56.591 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:56.591 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:56.591 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:56.591 { 00:16:56.591 "params": { 00:16:56.591 "name": "Nvme$subsystem", 00:16:56.591 "trtype": "$TEST_TRANSPORT", 00:16:56.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:56.591 "adrfam": "ipv4", 00:16:56.591 "trsvcid": "$NVMF_PORT", 00:16:56.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:56.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:56.591 "hdgst": ${hdgst:-false}, 00:16:56.591 "ddgst": ${ddgst:-false} 00:16:56.591 }, 00:16:56.591 "method": "bdev_nvme_attach_controller" 00:16:56.591 } 00:16:56.591 EOF 00:16:56.591 )") 00:16:56.591 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:56.591 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:56.591 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:56.591 { 00:16:56.591 "params": { 00:16:56.591 "name": "Nvme$subsystem", 00:16:56.591 "trtype": "$TEST_TRANSPORT", 00:16:56.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:56.591 "adrfam": "ipv4", 00:16:56.591 "trsvcid": "$NVMF_PORT", 00:16:56.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:56.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:56.591 "hdgst": ${hdgst:-false}, 00:16:56.591 "ddgst": ${ddgst:-false} 00:16:56.591 }, 00:16:56.591 "method": "bdev_nvme_attach_controller" 00:16:56.591 } 00:16:56.591 EOF 00:16:56.591 )") 00:16:56.591 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:56.591 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:56.591 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:56.591 { 00:16:56.591 "params": { 00:16:56.591 "name": "Nvme$subsystem", 00:16:56.591 "trtype": "$TEST_TRANSPORT", 00:16:56.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:56.591 "adrfam": "ipv4", 00:16:56.591 "trsvcid": "$NVMF_PORT", 00:16:56.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:56.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:56.591 "hdgst": ${hdgst:-false}, 00:16:56.591 "ddgst": ${ddgst:-false} 00:16:56.591 }, 00:16:56.591 "method": "bdev_nvme_attach_controller" 00:16:56.591 } 00:16:56.591 EOF 00:16:56.591 )") 00:16:56.591 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:56.591 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:16:56.591 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:56.591 { 00:16:56.591 "params": { 00:16:56.591 "name": "Nvme$subsystem", 00:16:56.591 "trtype": "$TEST_TRANSPORT", 00:16:56.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:56.591 "adrfam": "ipv4", 00:16:56.591 "trsvcid": "$NVMF_PORT", 00:16:56.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:56.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:56.591 "hdgst": ${hdgst:-false}, 00:16:56.591 "ddgst": ${ddgst:-false} 00:16:56.591 }, 00:16:56.591 "method": "bdev_nvme_attach_controller" 00:16:56.591 } 00:16:56.591 EOF 00:16:56.591 )") 00:16:56.591 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:56.591 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:56.591 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:56.591 { 00:16:56.591 "params": { 00:16:56.591 "name": "Nvme$subsystem", 00:16:56.591 "trtype": "$TEST_TRANSPORT", 00:16:56.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:56.591 "adrfam": "ipv4", 00:16:56.591 "trsvcid": "$NVMF_PORT", 00:16:56.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:56.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:56.591 "hdgst": ${hdgst:-false}, 00:16:56.591 "ddgst": ${ddgst:-false} 00:16:56.591 }, 00:16:56.591 "method": "bdev_nvme_attach_controller" 00:16:56.591 } 00:16:56.591 EOF 00:16:56.591 )") 00:16:56.591 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:56.591 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:56.591 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:56.591 { 00:16:56.591 "params": { 00:16:56.591 "name": "Nvme$subsystem", 00:16:56.591 "trtype": "$TEST_TRANSPORT", 00:16:56.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:56.591 "adrfam": "ipv4", 00:16:56.591 "trsvcid": "$NVMF_PORT", 00:16:56.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:56.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:56.591 "hdgst": ${hdgst:-false}, 00:16:56.591 "ddgst": ${ddgst:-false} 00:16:56.591 }, 00:16:56.591 "method": "bdev_nvme_attach_controller" 00:16:56.591 } 00:16:56.591 EOF 00:16:56.591 )") 00:16:56.591 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:56.591 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:56.591 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:56.591 { 00:16:56.591 "params": { 00:16:56.591 "name": "Nvme$subsystem", 00:16:56.591 "trtype": "$TEST_TRANSPORT", 00:16:56.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:56.591 "adrfam": "ipv4", 00:16:56.591 "trsvcid": "$NVMF_PORT", 00:16:56.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:56.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:56.591 "hdgst": ${hdgst:-false}, 00:16:56.591 "ddgst": ${ddgst:-false} 00:16:56.591 }, 00:16:56.591 "method": "bdev_nvme_attach_controller" 00:16:56.591 } 00:16:56.591 EOF 00:16:56.591 )") 00:16:56.591 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:56.591 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:16:56.591 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:16:56.591 01:04:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:56.591 "params": { 00:16:56.591 "name": "Nvme1", 00:16:56.591 "trtype": "tcp", 00:16:56.591 "traddr": "10.0.0.2", 00:16:56.591 "adrfam": "ipv4", 00:16:56.591 "trsvcid": "4420", 00:16:56.591 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:56.591 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:56.591 "hdgst": false, 00:16:56.591 "ddgst": false 00:16:56.591 }, 00:16:56.591 "method": "bdev_nvme_attach_controller" 00:16:56.591 },{ 00:16:56.591 "params": { 00:16:56.591 "name": "Nvme2", 00:16:56.591 "trtype": "tcp", 00:16:56.591 "traddr": "10.0.0.2", 00:16:56.591 "adrfam": "ipv4", 00:16:56.591 "trsvcid": "4420", 00:16:56.591 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:56.591 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:56.591 "hdgst": false, 00:16:56.591 "ddgst": false 00:16:56.591 }, 00:16:56.591 "method": "bdev_nvme_attach_controller" 00:16:56.591 },{ 00:16:56.591 "params": { 00:16:56.591 "name": "Nvme3", 00:16:56.591 "trtype": "tcp", 00:16:56.591 "traddr": "10.0.0.2", 00:16:56.591 "adrfam": "ipv4", 00:16:56.591 "trsvcid": "4420", 00:16:56.591 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:56.591 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:56.591 "hdgst": false, 00:16:56.591 "ddgst": false 00:16:56.591 }, 00:16:56.591 "method": "bdev_nvme_attach_controller" 00:16:56.591 },{ 00:16:56.591 "params": { 00:16:56.591 "name": "Nvme4", 00:16:56.591 "trtype": "tcp", 00:16:56.591 "traddr": "10.0.0.2", 00:16:56.591 "adrfam": "ipv4", 00:16:56.591 "trsvcid": "4420", 00:16:56.591 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:56.591 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:56.591 "hdgst": false, 00:16:56.592 "ddgst": false 00:16:56.592 }, 00:16:56.592 "method": "bdev_nvme_attach_controller" 00:16:56.592 },{ 00:16:56.592 "params": { 00:16:56.592 "name": "Nvme5", 00:16:56.592 "trtype": "tcp", 00:16:56.592 "traddr": "10.0.0.2", 00:16:56.592 "adrfam": "ipv4", 00:16:56.592 "trsvcid": "4420", 00:16:56.592 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:56.592 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:56.592 "hdgst": false, 00:16:56.592 "ddgst": false 00:16:56.592 }, 00:16:56.592 "method": "bdev_nvme_attach_controller" 00:16:56.592 },{ 00:16:56.592 "params": { 00:16:56.592 "name": "Nvme6", 00:16:56.592 "trtype": "tcp", 00:16:56.592 "traddr": "10.0.0.2", 00:16:56.592 "adrfam": "ipv4", 00:16:56.592 "trsvcid": "4420", 00:16:56.592 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:56.592 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:56.592 "hdgst": false, 00:16:56.592 "ddgst": false 00:16:56.592 }, 00:16:56.592 "method": "bdev_nvme_attach_controller" 00:16:56.592 },{ 00:16:56.592 "params": { 00:16:56.592 "name": "Nvme7", 00:16:56.592 "trtype": "tcp", 00:16:56.592 "traddr": "10.0.0.2", 00:16:56.592 "adrfam": "ipv4", 00:16:56.592 "trsvcid": "4420", 00:16:56.592 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:56.592 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:56.592 "hdgst": false, 00:16:56.592 "ddgst": false 00:16:56.592 }, 00:16:56.592 "method": "bdev_nvme_attach_controller" 00:16:56.592 },{ 00:16:56.592 "params": { 00:16:56.592 "name": "Nvme8", 00:16:56.592 "trtype": "tcp", 00:16:56.592 "traddr": "10.0.0.2", 00:16:56.592 "adrfam": "ipv4", 00:16:56.592 "trsvcid": "4420", 00:16:56.592 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:56.592 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:56.592 "hdgst": false, 
00:16:56.592 "ddgst": false 00:16:56.592 }, 00:16:56.592 "method": "bdev_nvme_attach_controller" 00:16:56.592 },{ 00:16:56.592 "params": { 00:16:56.592 "name": "Nvme9", 00:16:56.592 "trtype": "tcp", 00:16:56.592 "traddr": "10.0.0.2", 00:16:56.592 "adrfam": "ipv4", 00:16:56.592 "trsvcid": "4420", 00:16:56.592 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:56.592 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:56.592 "hdgst": false, 00:16:56.592 "ddgst": false 00:16:56.592 }, 00:16:56.592 "method": "bdev_nvme_attach_controller" 00:16:56.592 },{ 00:16:56.592 "params": { 00:16:56.592 "name": "Nvme10", 00:16:56.592 "trtype": "tcp", 00:16:56.592 "traddr": "10.0.0.2", 00:16:56.592 "adrfam": "ipv4", 00:16:56.592 "trsvcid": "4420", 00:16:56.592 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:56.592 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:56.592 "hdgst": false, 00:16:56.592 "ddgst": false 00:16:56.592 }, 00:16:56.592 "method": "bdev_nvme_attach_controller" 00:16:56.592 }' 00:16:56.592 [2024-05-15 01:04:08.907835] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:16:56.592 [2024-05-15 01:04:08.907910] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:56.592 EAL: No free 2048 kB hugepages reported on node 1 00:16:56.850 [2024-05-15 01:04:08.983152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.850 [2024-05-15 01:04:09.093000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.223 01:04:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:58.223 01:04:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:16:58.223 01:04:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:58.223 01:04:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.223 01:04:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:58.223 01:04:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.223 01:04:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1278127 00:16:58.223 01:04:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:16:58.223 01:04:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:16:59.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1278127 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:16:59.595 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1277951 00:16:59.595 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:59.595 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:59.595 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:16:59.595 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 
-- # local subsystem config 00:16:59.595 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:59.595 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:59.595 { 00:16:59.595 "params": { 00:16:59.595 "name": "Nvme$subsystem", 00:16:59.595 "trtype": "$TEST_TRANSPORT", 00:16:59.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:59.595 "adrfam": "ipv4", 00:16:59.595 "trsvcid": "$NVMF_PORT", 00:16:59.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:59.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:59.595 "hdgst": ${hdgst:-false}, 00:16:59.595 "ddgst": ${ddgst:-false} 00:16:59.595 }, 00:16:59.595 "method": "bdev_nvme_attach_controller" 00:16:59.595 } 00:16:59.595 EOF 00:16:59.595 )") 00:16:59.595 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:59.595 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:59.595 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:59.595 { 00:16:59.595 "params": { 00:16:59.595 "name": "Nvme$subsystem", 00:16:59.595 "trtype": "$TEST_TRANSPORT", 00:16:59.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:59.595 "adrfam": "ipv4", 00:16:59.595 "trsvcid": "$NVMF_PORT", 00:16:59.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:59.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:59.595 "hdgst": ${hdgst:-false}, 00:16:59.595 "ddgst": ${ddgst:-false} 00:16:59.595 }, 00:16:59.595 "method": "bdev_nvme_attach_controller" 00:16:59.595 } 00:16:59.595 EOF 00:16:59.595 )") 00:16:59.595 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:59.595 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:59.595 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:59.595 { 00:16:59.595 "params": { 00:16:59.595 "name": "Nvme$subsystem", 00:16:59.595 "trtype": "$TEST_TRANSPORT", 00:16:59.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:59.595 "adrfam": "ipv4", 00:16:59.595 "trsvcid": "$NVMF_PORT", 00:16:59.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:59.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:59.595 "hdgst": ${hdgst:-false}, 00:16:59.595 "ddgst": ${ddgst:-false} 00:16:59.595 }, 00:16:59.595 "method": "bdev_nvme_attach_controller" 00:16:59.595 } 00:16:59.595 EOF 00:16:59.595 )") 00:16:59.595 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:59.595 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:59.595 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:59.595 { 00:16:59.595 "params": { 00:16:59.595 "name": "Nvme$subsystem", 00:16:59.595 "trtype": "$TEST_TRANSPORT", 00:16:59.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:59.595 "adrfam": "ipv4", 00:16:59.595 "trsvcid": "$NVMF_PORT", 00:16:59.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:59.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:59.595 "hdgst": ${hdgst:-false}, 00:16:59.595 "ddgst": ${ddgst:-false} 00:16:59.595 }, 00:16:59.595 "method": "bdev_nvme_attach_controller" 00:16:59.595 } 00:16:59.595 EOF 00:16:59.595 )") 00:16:59.595 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:16:59.595 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:59.595 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:59.595 { 00:16:59.595 "params": { 00:16:59.595 "name": "Nvme$subsystem", 00:16:59.595 "trtype": "$TEST_TRANSPORT", 00:16:59.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:59.595 "adrfam": "ipv4", 00:16:59.595 "trsvcid": "$NVMF_PORT", 00:16:59.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:59.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:59.595 "hdgst": ${hdgst:-false}, 00:16:59.595 "ddgst": ${ddgst:-false} 00:16:59.595 }, 00:16:59.595 "method": "bdev_nvme_attach_controller" 00:16:59.595 } 00:16:59.595 EOF 00:16:59.595 )") 00:16:59.595 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:59.595 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:59.595 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:59.595 { 00:16:59.595 "params": { 00:16:59.595 "name": "Nvme$subsystem", 00:16:59.595 "trtype": "$TEST_TRANSPORT", 00:16:59.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:59.595 "adrfam": "ipv4", 00:16:59.595 "trsvcid": "$NVMF_PORT", 00:16:59.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:59.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:59.595 "hdgst": ${hdgst:-false}, 00:16:59.595 "ddgst": ${ddgst:-false} 00:16:59.595 }, 00:16:59.595 "method": "bdev_nvme_attach_controller" 00:16:59.595 } 00:16:59.595 EOF 00:16:59.595 )") 00:16:59.595 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:59.595 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:59.595 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:59.595 { 00:16:59.595 "params": { 00:16:59.595 "name": "Nvme$subsystem", 00:16:59.595 "trtype": "$TEST_TRANSPORT", 00:16:59.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:59.595 "adrfam": "ipv4", 00:16:59.595 "trsvcid": "$NVMF_PORT", 00:16:59.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:59.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:59.595 "hdgst": ${hdgst:-false}, 00:16:59.595 "ddgst": ${ddgst:-false} 00:16:59.595 }, 00:16:59.595 "method": "bdev_nvme_attach_controller" 00:16:59.595 } 00:16:59.595 EOF 00:16:59.595 )") 00:16:59.595 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:59.595 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:59.595 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:59.595 { 00:16:59.595 "params": { 00:16:59.595 "name": "Nvme$subsystem", 00:16:59.595 "trtype": "$TEST_TRANSPORT", 00:16:59.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:59.595 "adrfam": "ipv4", 00:16:59.595 "trsvcid": "$NVMF_PORT", 00:16:59.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:59.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:59.595 "hdgst": ${hdgst:-false}, 00:16:59.595 "ddgst": ${ddgst:-false} 00:16:59.595 }, 00:16:59.595 "method": "bdev_nvme_attach_controller" 00:16:59.595 } 00:16:59.595 EOF 00:16:59.595 )") 00:16:59.595 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:59.595 01:04:11 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:59.595 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:59.595 { 00:16:59.595 "params": { 00:16:59.595 "name": "Nvme$subsystem", 00:16:59.595 "trtype": "$TEST_TRANSPORT", 00:16:59.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:59.595 "adrfam": "ipv4", 00:16:59.595 "trsvcid": "$NVMF_PORT", 00:16:59.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:59.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:59.596 "hdgst": ${hdgst:-false}, 00:16:59.596 "ddgst": ${ddgst:-false} 00:16:59.596 }, 00:16:59.596 "method": "bdev_nvme_attach_controller" 00:16:59.596 } 00:16:59.596 EOF 00:16:59.596 )") 00:16:59.596 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:59.596 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:59.596 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:59.596 { 00:16:59.596 "params": { 00:16:59.596 "name": "Nvme$subsystem", 00:16:59.596 "trtype": "$TEST_TRANSPORT", 00:16:59.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:59.596 "adrfam": "ipv4", 00:16:59.596 "trsvcid": "$NVMF_PORT", 00:16:59.596 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:59.596 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:59.596 "hdgst": ${hdgst:-false}, 00:16:59.596 "ddgst": ${ddgst:-false} 00:16:59.596 }, 00:16:59.596 "method": "bdev_nvme_attach_controller" 00:16:59.596 } 00:16:59.596 EOF 00:16:59.596 )") 00:16:59.596 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:59.596 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
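The assembled JSON is never written to a file: it is handed to the consumer through process substitution, which is why the trace shows "--json /dev/fd/62" (and /dev/fd/63 for the earlier bdev_svc placeholder that was killed above). A condensed form of the shutdown.sh@91 invocation, reconstructed from the trace with the workspace path shortened and the brace expansion standing in for the num_subsystems array:

    ./build/examples/bdevperf -q 64 -o 65536 -w verify -t 1 \
            --json <(gen_nvmf_target_json {1..10})

    # -q 64      queue depth per job ("depth: 64" in the results below)
    # -o 65536   I/O size in bytes ("IO size: 65536")
    # -w verify  write-then-read-back verification workload
    # -t 1       run for one second ("Running I/O for 1 seconds...")
    # <( ... )   process substitution; this is what appears as --json /dev/fd/62 in the trace

Because the nvmf target started for tc1 (pid 1277951) is still listening on 10.0.0.2:4420, bdevperf attaches all ten controllers, drives I/O for the one-second window, and its per-Nvme throughput table follows in the log.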
00:16:59.596 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:16:59.596 01:04:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:59.596 "params": { 00:16:59.596 "name": "Nvme1", 00:16:59.596 "trtype": "tcp", 00:16:59.596 "traddr": "10.0.0.2", 00:16:59.596 "adrfam": "ipv4", 00:16:59.596 "trsvcid": "4420", 00:16:59.596 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.596 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:59.596 "hdgst": false, 00:16:59.596 "ddgst": false 00:16:59.596 }, 00:16:59.596 "method": "bdev_nvme_attach_controller" 00:16:59.596 },{ 00:16:59.596 "params": { 00:16:59.596 "name": "Nvme2", 00:16:59.596 "trtype": "tcp", 00:16:59.596 "traddr": "10.0.0.2", 00:16:59.596 "adrfam": "ipv4", 00:16:59.596 "trsvcid": "4420", 00:16:59.596 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:59.596 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:59.596 "hdgst": false, 00:16:59.596 "ddgst": false 00:16:59.596 }, 00:16:59.596 "method": "bdev_nvme_attach_controller" 00:16:59.596 },{ 00:16:59.596 "params": { 00:16:59.596 "name": "Nvme3", 00:16:59.596 "trtype": "tcp", 00:16:59.596 "traddr": "10.0.0.2", 00:16:59.596 "adrfam": "ipv4", 00:16:59.596 "trsvcid": "4420", 00:16:59.596 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:59.596 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:59.596 "hdgst": false, 00:16:59.596 "ddgst": false 00:16:59.596 }, 00:16:59.596 "method": "bdev_nvme_attach_controller" 00:16:59.596 },{ 00:16:59.596 "params": { 00:16:59.596 "name": "Nvme4", 00:16:59.596 "trtype": "tcp", 00:16:59.596 "traddr": "10.0.0.2", 00:16:59.596 "adrfam": "ipv4", 00:16:59.596 "trsvcid": "4420", 00:16:59.596 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:59.596 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:59.596 "hdgst": false, 00:16:59.596 "ddgst": false 00:16:59.596 }, 00:16:59.596 "method": "bdev_nvme_attach_controller" 00:16:59.596 },{ 00:16:59.596 "params": { 00:16:59.596 "name": "Nvme5", 00:16:59.596 "trtype": "tcp", 00:16:59.596 "traddr": "10.0.0.2", 00:16:59.596 "adrfam": "ipv4", 00:16:59.596 "trsvcid": "4420", 00:16:59.596 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:59.596 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:59.596 "hdgst": false, 00:16:59.596 "ddgst": false 00:16:59.596 }, 00:16:59.596 "method": "bdev_nvme_attach_controller" 00:16:59.596 },{ 00:16:59.596 "params": { 00:16:59.596 "name": "Nvme6", 00:16:59.596 "trtype": "tcp", 00:16:59.596 "traddr": "10.0.0.2", 00:16:59.596 "adrfam": "ipv4", 00:16:59.596 "trsvcid": "4420", 00:16:59.596 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:59.596 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:59.596 "hdgst": false, 00:16:59.596 "ddgst": false 00:16:59.596 }, 00:16:59.596 "method": "bdev_nvme_attach_controller" 00:16:59.596 },{ 00:16:59.596 "params": { 00:16:59.596 "name": "Nvme7", 00:16:59.596 "trtype": "tcp", 00:16:59.596 "traddr": "10.0.0.2", 00:16:59.596 "adrfam": "ipv4", 00:16:59.596 "trsvcid": "4420", 00:16:59.596 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:59.596 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:59.596 "hdgst": false, 00:16:59.596 "ddgst": false 00:16:59.596 }, 00:16:59.596 "method": "bdev_nvme_attach_controller" 00:16:59.596 },{ 00:16:59.596 "params": { 00:16:59.596 "name": "Nvme8", 00:16:59.596 "trtype": "tcp", 00:16:59.596 "traddr": "10.0.0.2", 00:16:59.596 "adrfam": "ipv4", 00:16:59.596 "trsvcid": "4420", 00:16:59.596 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:59.596 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:59.596 "hdgst": false, 
00:16:59.596 "ddgst": false 00:16:59.596 }, 00:16:59.596 "method": "bdev_nvme_attach_controller" 00:16:59.596 },{ 00:16:59.596 "params": { 00:16:59.596 "name": "Nvme9", 00:16:59.596 "trtype": "tcp", 00:16:59.596 "traddr": "10.0.0.2", 00:16:59.596 "adrfam": "ipv4", 00:16:59.596 "trsvcid": "4420", 00:16:59.596 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:59.596 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:59.596 "hdgst": false, 00:16:59.596 "ddgst": false 00:16:59.596 }, 00:16:59.596 "method": "bdev_nvme_attach_controller" 00:16:59.596 },{ 00:16:59.596 "params": { 00:16:59.596 "name": "Nvme10", 00:16:59.596 "trtype": "tcp", 00:16:59.596 "traddr": "10.0.0.2", 00:16:59.596 "adrfam": "ipv4", 00:16:59.596 "trsvcid": "4420", 00:16:59.596 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:59.596 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:59.596 "hdgst": false, 00:16:59.596 "ddgst": false 00:16:59.596 }, 00:16:59.596 "method": "bdev_nvme_attach_controller" 00:16:59.596 }' 00:16:59.596 [2024-05-15 01:04:11.653865] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:16:59.596 [2024-05-15 01:04:11.653989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1278423 ] 00:16:59.596 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.596 [2024-05-15 01:04:11.731902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.596 [2024-05-15 01:04:11.845822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.976 Running I/O for 1 seconds... 00:17:02.360 00:17:02.360 Latency(us) 00:17:02.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:02.361 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:02.361 Verification LBA range: start 0x0 length 0x400 00:17:02.361 Nvme1n1 : 1.13 227.25 14.20 0.00 0.00 278884.50 20971.52 271853.04 00:17:02.361 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:02.361 Verification LBA range: start 0x0 length 0x400 00:17:02.361 Nvme2n1 : 1.11 235.97 14.75 0.00 0.00 262645.20 11505.21 267192.70 00:17:02.361 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:02.361 Verification LBA range: start 0x0 length 0x400 00:17:02.361 Nvme3n1 : 1.06 180.71 11.29 0.00 0.00 338162.22 25631.86 287387.50 00:17:02.361 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:02.361 Verification LBA range: start 0x0 length 0x400 00:17:02.361 Nvme4n1 : 1.08 177.97 11.12 0.00 0.00 337463.37 21845.33 326223.64 00:17:02.361 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:02.361 Verification LBA range: start 0x0 length 0x400 00:17:02.361 Nvme5n1 : 1.17 219.05 13.69 0.00 0.00 270955.71 25243.50 259425.47 00:17:02.361 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:02.361 Verification LBA range: start 0x0 length 0x400 00:17:02.361 Nvme6n1 : 1.16 220.02 13.75 0.00 0.00 264976.69 22913.33 292047.83 00:17:02.361 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:02.361 Verification LBA range: start 0x0 length 0x400 00:17:02.361 Nvme7n1 : 1.15 222.28 13.89 0.00 0.00 253209.22 21845.33 271853.04 00:17:02.361 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:02.361 Verification LBA range: start 0x0 length 0x400 
00:17:02.361 Nvme8n1 : 1.18 271.18 16.95 0.00 0.00 208166.15 18932.62 278066.82 00:17:02.361 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:02.361 Verification LBA range: start 0x0 length 0x400 00:17:02.361 Nvme9n1 : 1.17 218.27 13.64 0.00 0.00 254144.47 23592.96 274959.93 00:17:02.361 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:02.361 Verification LBA range: start 0x0 length 0x400 00:17:02.361 Nvme10n1 : 1.19 268.91 16.81 0.00 0.00 203137.63 14466.47 240784.12 00:17:02.361 =================================================================================================================== 00:17:02.361 Total : 2241.60 140.10 0.00 0.00 260571.33 11505.21 326223.64 00:17:02.647 01:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:17:02.647 01:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:17:02.647 01:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:02.647 01:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:02.647 01:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:17:02.647 01:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:02.647 01:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:17:02.647 01:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:02.647 01:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:17:02.647 01:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:02.647 01:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:02.647 rmmod nvme_tcp 00:17:02.648 rmmod nvme_fabrics 00:17:02.648 rmmod nvme_keyring 00:17:02.648 01:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:02.648 01:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:17:02.648 01:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:17:02.648 01:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1277951 ']' 00:17:02.648 01:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1277951 00:17:02.648 01:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 1277951 ']' 00:17:02.648 01:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 1277951 00:17:02.648 01:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:17:02.648 01:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:02.648 01:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1277951 00:17:02.648 01:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:02.648 01:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:02.648 
01:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1277951' 00:17:02.648 killing process with pid 1277951 00:17:02.648 01:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 1277951 00:17:02.648 [2024-05-15 01:04:14.931174] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:02.648 01:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 1277951 00:17:03.215 01:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:03.215 01:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:03.215 01:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:03.215 01:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:03.215 01:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:03.215 01:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.215 01:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:03.215 01:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:05.747 00:17:05.747 real 0m12.192s 00:17:05.747 user 0m33.870s 00:17:05.747 sys 0m3.595s 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:17:05.747 ************************************ 00:17:05.747 END TEST nvmf_shutdown_tc1 00:17:05.747 ************************************ 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:17:05.747 ************************************ 00:17:05.747 START TEST nvmf_shutdown_tc2 00:17:05.747 ************************************ 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:05.747 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:05.748 01:04:17 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:05.748 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:05.748 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:05.748 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:05.748 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:05.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:05.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:17:05.748 00:17:05.748 --- 10.0.0.2 ping statistics --- 00:17:05.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.748 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:05.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:05.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:17:05.748 00:17:05.748 --- 10.0.0.1 ping statistics --- 00:17:05.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.748 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1279319 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1279319 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1279319 ']' 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:05.748 01:04:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:05.748 [2024-05-15 01:04:17.813090] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:17:05.748 [2024-05-15 01:04:17.813166] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:05.748 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.748 [2024-05-15 01:04:17.908730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:05.748 [2024-05-15 01:04:18.022090] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:05.749 [2024-05-15 01:04:18.022156] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:05.749 [2024-05-15 01:04:18.022172] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:05.749 [2024-05-15 01:04:18.022185] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:05.749 [2024-05-15 01:04:18.022196] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:05.749 [2024-05-15 01:04:18.022300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:05.749 [2024-05-15 01:04:18.022395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:05.749 [2024-05-15 01:04:18.022461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:05.749 [2024-05-15 01:04:18.022463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:06.681 [2024-05-15 01:04:18.789842] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.681 01:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:06.681 Malloc1 00:17:06.681 [2024-05-15 01:04:18.864769] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:06.681 [2024-05-15 01:04:18.865097] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:06.681 Malloc2 00:17:06.681 Malloc3 00:17:06.681 Malloc4 00:17:06.681 Malloc5 00:17:06.940 Malloc6 00:17:06.940 Malloc7 00:17:06.940 Malloc8 00:17:06.940 Malloc9 00:17:06.940 Malloc10 00:17:06.940 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.940 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:17:06.940 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:06.940 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:06.940 01:04:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1279509 00:17:06.940 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1279509 /var/tmp/bdevperf.sock 00:17:06.940 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1279509 ']' 00:17:06.940 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:06.940 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:06.940 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:17:06.940 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:06.940 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:06.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:06.940 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:17:06.940 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:06.940 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:17:06.940 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:06.940 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:06.940 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:06.940 { 00:17:06.940 "params": { 00:17:06.940 "name": "Nvme$subsystem", 00:17:06.940 "trtype": "$TEST_TRANSPORT", 00:17:06.940 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:06.940 "adrfam": "ipv4", 00:17:06.940 "trsvcid": "$NVMF_PORT", 00:17:06.940 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:06.940 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:06.940 "hdgst": ${hdgst:-false}, 00:17:06.940 "ddgst": ${ddgst:-false} 00:17:06.940 }, 00:17:06.940 "method": "bdev_nvme_attach_controller" 00:17:06.940 } 00:17:06.940 EOF 00:17:06.940 )") 00:17:06.940 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:17:06.940 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:06.940 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:06.940 { 00:17:06.940 "params": { 00:17:06.940 "name": "Nvme$subsystem", 00:17:06.940 "trtype": "$TEST_TRANSPORT", 00:17:06.940 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:06.940 "adrfam": "ipv4", 00:17:06.940 "trsvcid": "$NVMF_PORT", 00:17:06.940 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:06.940 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:06.940 "hdgst": ${hdgst:-false}, 00:17:06.940 "ddgst": ${ddgst:-false} 00:17:06.940 }, 00:17:06.940 "method": "bdev_nvme_attach_controller" 00:17:06.940 } 00:17:06.940 EOF 00:17:06.940 )") 00:17:06.940 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:17:07.198 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:07.198 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:07.198 { 00:17:07.198 "params": { 00:17:07.198 "name": "Nvme$subsystem", 00:17:07.198 "trtype": "$TEST_TRANSPORT", 00:17:07.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:07.198 "adrfam": "ipv4", 00:17:07.198 "trsvcid": "$NVMF_PORT", 00:17:07.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:07.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:07.198 "hdgst": ${hdgst:-false}, 00:17:07.198 "ddgst": ${ddgst:-false} 00:17:07.198 }, 00:17:07.198 "method": "bdev_nvme_attach_controller" 00:17:07.198 } 00:17:07.198 EOF 00:17:07.198 )") 00:17:07.198 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:17:07.198 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:07.198 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:07.198 { 00:17:07.198 "params": { 00:17:07.198 "name": "Nvme$subsystem", 00:17:07.198 "trtype": "$TEST_TRANSPORT", 00:17:07.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:07.198 "adrfam": "ipv4", 00:17:07.198 "trsvcid": "$NVMF_PORT", 00:17:07.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:07.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:07.198 "hdgst": ${hdgst:-false}, 00:17:07.198 "ddgst": ${ddgst:-false} 00:17:07.198 }, 00:17:07.198 "method": "bdev_nvme_attach_controller" 00:17:07.198 } 00:17:07.198 EOF 00:17:07.198 )") 00:17:07.198 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:17:07.198 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:07.198 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:07.198 { 00:17:07.198 "params": { 00:17:07.198 "name": "Nvme$subsystem", 00:17:07.198 "trtype": "$TEST_TRANSPORT", 00:17:07.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:07.198 "adrfam": "ipv4", 00:17:07.198 "trsvcid": "$NVMF_PORT", 00:17:07.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:07.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:07.198 "hdgst": ${hdgst:-false}, 00:17:07.198 "ddgst": ${ddgst:-false} 00:17:07.198 }, 00:17:07.198 "method": "bdev_nvme_attach_controller" 00:17:07.198 } 00:17:07.198 EOF 00:17:07.198 )") 00:17:07.198 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:17:07.198 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:07.198 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:07.198 { 00:17:07.198 "params": { 00:17:07.198 "name": "Nvme$subsystem", 00:17:07.198 "trtype": "$TEST_TRANSPORT", 00:17:07.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:07.198 "adrfam": "ipv4", 00:17:07.198 "trsvcid": "$NVMF_PORT", 00:17:07.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:07.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:07.198 "hdgst": ${hdgst:-false}, 00:17:07.198 "ddgst": ${ddgst:-false} 00:17:07.198 }, 00:17:07.198 "method": "bdev_nvme_attach_controller" 00:17:07.198 } 00:17:07.198 EOF 00:17:07.198 )") 00:17:07.198 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:17:07.198 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:17:07.198 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:07.198 { 00:17:07.198 "params": { 00:17:07.198 "name": "Nvme$subsystem", 00:17:07.198 "trtype": "$TEST_TRANSPORT", 00:17:07.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:07.198 "adrfam": "ipv4", 00:17:07.198 "trsvcid": "$NVMF_PORT", 00:17:07.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:07.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:07.198 "hdgst": ${hdgst:-false}, 00:17:07.198 "ddgst": ${ddgst:-false} 00:17:07.198 }, 00:17:07.198 "method": "bdev_nvme_attach_controller" 00:17:07.198 } 00:17:07.199 EOF 00:17:07.199 )") 00:17:07.199 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:17:07.199 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:07.199 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:07.199 { 00:17:07.199 "params": { 00:17:07.199 "name": "Nvme$subsystem", 00:17:07.199 "trtype": "$TEST_TRANSPORT", 00:17:07.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:07.199 "adrfam": "ipv4", 00:17:07.199 "trsvcid": "$NVMF_PORT", 00:17:07.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:07.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:07.199 "hdgst": ${hdgst:-false}, 00:17:07.199 "ddgst": ${ddgst:-false} 00:17:07.199 }, 00:17:07.199 "method": "bdev_nvme_attach_controller" 00:17:07.199 } 00:17:07.199 EOF 00:17:07.199 )") 00:17:07.199 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:17:07.199 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:07.199 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:07.199 { 00:17:07.199 "params": { 00:17:07.199 "name": "Nvme$subsystem", 00:17:07.199 "trtype": "$TEST_TRANSPORT", 00:17:07.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:07.199 "adrfam": "ipv4", 00:17:07.199 "trsvcid": "$NVMF_PORT", 00:17:07.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:07.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:07.199 "hdgst": ${hdgst:-false}, 00:17:07.199 "ddgst": ${ddgst:-false} 00:17:07.199 }, 00:17:07.199 "method": "bdev_nvme_attach_controller" 00:17:07.199 } 00:17:07.199 EOF 00:17:07.199 )") 00:17:07.199 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:17:07.199 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:07.199 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:07.199 { 00:17:07.199 "params": { 00:17:07.199 "name": "Nvme$subsystem", 00:17:07.199 "trtype": "$TEST_TRANSPORT", 00:17:07.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:07.199 "adrfam": "ipv4", 00:17:07.199 "trsvcid": "$NVMF_PORT", 00:17:07.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:07.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:07.199 "hdgst": ${hdgst:-false}, 00:17:07.199 "ddgst": ${ddgst:-false} 00:17:07.199 }, 00:17:07.199 "method": "bdev_nvme_attach_controller" 00:17:07.199 } 00:17:07.199 EOF 00:17:07.199 )") 00:17:07.199 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:17:07.199 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:17:07.199 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:17:07.199 01:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:07.199 "params": { 00:17:07.199 "name": "Nvme1", 00:17:07.199 "trtype": "tcp", 00:17:07.199 "traddr": "10.0.0.2", 00:17:07.199 "adrfam": "ipv4", 00:17:07.199 "trsvcid": "4420", 00:17:07.199 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:07.199 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:07.199 "hdgst": false, 00:17:07.199 "ddgst": false 00:17:07.199 }, 00:17:07.199 "method": "bdev_nvme_attach_controller" 00:17:07.199 },{ 00:17:07.199 "params": { 00:17:07.199 "name": "Nvme2", 00:17:07.199 "trtype": "tcp", 00:17:07.199 "traddr": "10.0.0.2", 00:17:07.199 "adrfam": "ipv4", 00:17:07.199 "trsvcid": "4420", 00:17:07.199 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:07.199 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:07.199 "hdgst": false, 00:17:07.199 "ddgst": false 00:17:07.199 }, 00:17:07.199 "method": "bdev_nvme_attach_controller" 00:17:07.199 },{ 00:17:07.199 "params": { 00:17:07.199 "name": "Nvme3", 00:17:07.199 "trtype": "tcp", 00:17:07.199 "traddr": "10.0.0.2", 00:17:07.199 "adrfam": "ipv4", 00:17:07.199 "trsvcid": "4420", 00:17:07.199 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:17:07.199 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:17:07.199 "hdgst": false, 00:17:07.199 "ddgst": false 00:17:07.199 }, 00:17:07.199 "method": "bdev_nvme_attach_controller" 00:17:07.199 },{ 00:17:07.199 "params": { 00:17:07.199 "name": "Nvme4", 00:17:07.199 "trtype": "tcp", 00:17:07.199 "traddr": "10.0.0.2", 00:17:07.199 "adrfam": "ipv4", 00:17:07.199 "trsvcid": "4420", 00:17:07.199 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:17:07.199 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:17:07.199 "hdgst": false, 00:17:07.199 "ddgst": false 00:17:07.199 }, 00:17:07.199 "method": "bdev_nvme_attach_controller" 00:17:07.199 },{ 00:17:07.199 "params": { 00:17:07.199 "name": "Nvme5", 00:17:07.199 "trtype": "tcp", 00:17:07.199 "traddr": "10.0.0.2", 00:17:07.199 "adrfam": "ipv4", 00:17:07.199 "trsvcid": "4420", 00:17:07.199 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:17:07.199 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:17:07.199 "hdgst": false, 00:17:07.199 "ddgst": false 00:17:07.199 }, 00:17:07.199 "method": "bdev_nvme_attach_controller" 00:17:07.199 },{ 00:17:07.199 "params": { 00:17:07.199 "name": "Nvme6", 00:17:07.199 "trtype": "tcp", 00:17:07.199 "traddr": "10.0.0.2", 00:17:07.199 "adrfam": "ipv4", 00:17:07.199 "trsvcid": "4420", 00:17:07.199 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:17:07.199 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:17:07.199 "hdgst": false, 00:17:07.199 "ddgst": false 00:17:07.199 }, 00:17:07.199 "method": "bdev_nvme_attach_controller" 00:17:07.199 },{ 00:17:07.199 "params": { 00:17:07.199 "name": "Nvme7", 00:17:07.199 "trtype": "tcp", 00:17:07.199 "traddr": "10.0.0.2", 00:17:07.199 "adrfam": "ipv4", 00:17:07.199 "trsvcid": "4420", 00:17:07.199 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:17:07.199 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:17:07.199 "hdgst": false, 00:17:07.199 "ddgst": false 00:17:07.199 }, 00:17:07.199 "method": "bdev_nvme_attach_controller" 00:17:07.199 },{ 00:17:07.199 "params": { 00:17:07.199 "name": "Nvme8", 00:17:07.199 "trtype": "tcp", 00:17:07.199 "traddr": "10.0.0.2", 00:17:07.199 "adrfam": "ipv4", 00:17:07.199 "trsvcid": "4420", 00:17:07.199 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:17:07.199 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:17:07.199 "hdgst": false, 
00:17:07.199 "ddgst": false 00:17:07.199 }, 00:17:07.199 "method": "bdev_nvme_attach_controller" 00:17:07.199 },{ 00:17:07.199 "params": { 00:17:07.199 "name": "Nvme9", 00:17:07.199 "trtype": "tcp", 00:17:07.199 "traddr": "10.0.0.2", 00:17:07.199 "adrfam": "ipv4", 00:17:07.199 "trsvcid": "4420", 00:17:07.199 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:17:07.199 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:17:07.199 "hdgst": false, 00:17:07.199 "ddgst": false 00:17:07.199 }, 00:17:07.199 "method": "bdev_nvme_attach_controller" 00:17:07.199 },{ 00:17:07.199 "params": { 00:17:07.199 "name": "Nvme10", 00:17:07.199 "trtype": "tcp", 00:17:07.199 "traddr": "10.0.0.2", 00:17:07.199 "adrfam": "ipv4", 00:17:07.199 "trsvcid": "4420", 00:17:07.199 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:17:07.199 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:17:07.199 "hdgst": false, 00:17:07.199 "ddgst": false 00:17:07.199 }, 00:17:07.199 "method": "bdev_nvme_attach_controller" 00:17:07.199 }' 00:17:07.199 [2024-05-15 01:04:19.368681] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:17:07.199 [2024-05-15 01:04:19.368769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1279509 ] 00:17:07.199 EAL: No free 2048 kB hugepages reported on node 1 00:17:07.199 [2024-05-15 01:04:19.442206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.200 [2024-05-15 01:04:19.552390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.097 Running I/O for 10 seconds... 00:17:09.097 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:09.097 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:17:09.097 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:09.097 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.097 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:09.097 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.097 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:17:09.097 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:09.097 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:17:09.097 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:17:09.097 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:17:09.097 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:17:09.097 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:09.097 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:09.097 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:09.097 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.097 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:09.097 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.097 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:17:09.097 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:17:09.097 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:17:09.359 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:17:09.359 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:09.359 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:09.359 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:09.359 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.359 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:09.359 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.360 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:17:09.360 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:17:09.360 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:17:09.624 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:17:09.624 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:09.624 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:09.624 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:09.624 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.624 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:09.624 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.624 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:17:09.624 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:17:09.624 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:17:09.624 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:17:09.624 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:17:09.624 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1279509 00:17:09.624 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 1279509 ']' 00:17:09.624 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 1279509 00:17:09.624 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 
00:17:09.624 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:09.624 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1279509 00:17:09.624 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:09.624 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:09.624 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1279509' 00:17:09.624 killing process with pid 1279509 00:17:09.624 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 1279509 00:17:09.624 01:04:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 1279509 00:17:09.887 Received shutdown signal, test time was about 0.921017 seconds 00:17:09.887 00:17:09.887 Latency(us) 00:17:09.887 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.887 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:09.887 Verification LBA range: start 0x0 length 0x400 00:17:09.887 Nvme1n1 : 0.88 228.04 14.25 0.00 0.00 273229.72 3835.07 242337.56 00:17:09.887 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:09.887 Verification LBA range: start 0x0 length 0x400 00:17:09.887 Nvme2n1 : 0.91 212.13 13.26 0.00 0.00 291616.24 23495.87 262532.36 00:17:09.887 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:09.887 Verification LBA range: start 0x0 length 0x400 00:17:09.887 Nvme3n1 : 0.91 281.07 17.57 0.00 0.00 215601.87 18738.44 264085.81 00:17:09.887 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:09.887 Verification LBA range: start 0x0 length 0x400 00:17:09.887 Nvme4n1 : 0.87 225.29 14.08 0.00 0.00 260446.27 9175.04 259425.47 00:17:09.887 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:09.887 Verification LBA range: start 0x0 length 0x400 00:17:09.887 Nvme5n1 : 0.89 216.38 13.52 0.00 0.00 267916.83 22233.69 251658.24 00:17:09.887 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:09.887 Verification LBA range: start 0x0 length 0x400 00:17:09.887 Nvme6n1 : 0.91 209.98 13.12 0.00 0.00 270839.15 22524.97 295154.73 00:17:09.887 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:09.887 Verification LBA range: start 0x0 length 0x400 00:17:09.887 Nvme7n1 : 0.87 220.39 13.77 0.00 0.00 250656.49 24369.68 262532.36 00:17:09.887 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:09.887 Verification LBA range: start 0x0 length 0x400 00:17:09.887 Nvme8n1 : 0.89 214.77 13.42 0.00 0.00 252297.92 20680.25 273406.48 00:17:09.887 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:09.887 Verification LBA range: start 0x0 length 0x400 00:17:09.887 Nvme9n1 : 0.90 212.35 13.27 0.00 0.00 249216.82 21068.61 246997.90 00:17:09.887 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:09.887 Verification LBA range: start 0x0 length 0x400 00:17:09.887 Nvme10n1 : 0.92 208.66 13.04 0.00 0.00 249351.08 20680.25 302921.96 00:17:09.887 =================================================================================================================== 00:17:09.887 Total : 2229.05 139.32 
0.00 0.00 256821.08 3835.07 302921.96 00:17:10.146 01:04:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:17:11.076 01:04:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1279319 00:17:11.076 01:04:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:17:11.076 01:04:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:17:11.076 01:04:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:11.076 01:04:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:11.076 01:04:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:17:11.076 01:04:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:11.076 01:04:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:17:11.076 01:04:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:11.076 01:04:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:17:11.076 01:04:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:11.076 01:04:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:11.076 rmmod nvme_tcp 00:17:11.076 rmmod nvme_fabrics 00:17:11.076 rmmod nvme_keyring 00:17:11.076 01:04:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:11.076 01:04:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:17:11.076 01:04:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:17:11.076 01:04:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1279319 ']' 00:17:11.076 01:04:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1279319 00:17:11.076 01:04:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 1279319 ']' 00:17:11.076 01:04:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 1279319 00:17:11.076 01:04:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:17:11.076 01:04:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:11.076 01:04:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1279319 00:17:11.076 01:04:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:11.076 01:04:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:11.076 01:04:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1279319' 00:17:11.076 killing process with pid 1279319 00:17:11.076 01:04:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 1279319 00:17:11.076 [2024-05-15 01:04:23.405963] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal 
in v24.09 hit 1 times 00:17:11.076 01:04:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 1279319 00:17:11.642 01:04:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:11.642 01:04:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:11.642 01:04:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:11.642 01:04:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:11.642 01:04:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:11.642 01:04:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.642 01:04:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:11.642 01:04:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.177 01:04:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:14.177 00:17:14.177 real 0m8.365s 00:17:14.177 user 0m25.809s 00:17:14.177 sys 0m1.617s 00:17:14.177 01:04:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:14.177 01:04:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:14.177 ************************************ 00:17:14.177 END TEST nvmf_shutdown_tc2 00:17:14.177 ************************************ 00:17:14.177 01:04:25 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:17:14.177 01:04:25 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:14.177 01:04:25 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:14.177 01:04:25 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:17:14.177 ************************************ 00:17:14.177 START TEST nvmf_shutdown_tc3 00:17:14.177 ************************************ 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:14.177 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:14.177 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:14.177 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:14.177 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:14.177 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:14.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:14.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:17:14.178 00:17:14.178 --- 10.0.0.2 ping statistics --- 00:17:14.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.178 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:14.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:14.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:17:14.178 00:17:14.178 --- 10.0.0.1 ping statistics --- 00:17:14.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.178 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1280424 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:14.178 01:04:26 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1280424 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 1280424 ']' 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:14.178 01:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:14.178 [2024-05-15 01:04:26.240771] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:17:14.178 [2024-05-15 01:04:26.240870] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.178 EAL: No free 2048 kB hugepages reported on node 1 00:17:14.178 [2024-05-15 01:04:26.323384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:14.178 [2024-05-15 01:04:26.440544] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:14.178 [2024-05-15 01:04:26.440615] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:14.178 [2024-05-15 01:04:26.440631] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:14.178 [2024-05-15 01:04:26.440645] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:14.178 [2024-05-15 01:04:26.440657] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
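The nvmf_tcp_init sequence traced above (nvmf/common.sh@229-@268) wires the two E810 ports together: cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), and nvmf_tgt is then started inside that namespace. A condensed sketch of the same commands, using the interface names and addresses from this run (they vary per machine):

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                               # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0       # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP (port 4420) through
ping -c 1 10.0.0.2                                            # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1                        # target ns -> root ns
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &   # becomes nvmfpid below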
00:17:14.178 [2024-05-15 01:04:26.440754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:14.178 [2024-05-15 01:04:26.440869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:14.178 [2024-05-15 01:04:26.440966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:14.178 [2024-05-15 01:04:26.440970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.112 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:15.112 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:17:15.112 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:15.112 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:15.112 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:15.112 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.112 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:15.112 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.112 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:15.112 [2024-05-15 01:04:27.199838] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:15.112 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.112 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:17:15.112 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:17:15.113 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:15.113 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:15.113 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:15.113 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:15.113 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:17:15.113 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:15.113 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:17:15.113 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:15.113 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:17:15.113 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:15.113 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:17:15.113 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:15.113 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:17:15.113 01:04:27 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:15.113 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:17:15.113 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:15.113 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:17:15.113 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:15.113 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:17:15.113 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:15.113 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:17:15.113 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:15.113 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:17:15.113 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:17:15.113 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.113 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:15.113 Malloc1 00:17:15.113 [2024-05-15 01:04:27.274631] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:15.113 [2024-05-15 01:04:27.274957] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:15.113 Malloc2 00:17:15.113 Malloc3 00:17:15.113 Malloc4 00:17:15.113 Malloc5 00:17:15.113 Malloc6 00:17:15.371 Malloc7 00:17:15.371 Malloc8 00:17:15.371 Malloc9 00:17:15.371 Malloc10 00:17:15.371 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.371 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:17:15.371 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:15.371 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:15.371 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1280610 00:17:15.371 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1280610 /var/tmp/bdevperf.sock 00:17:15.371 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 1280610 ']' 00:17:15.371 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:15.371 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:15.371 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:17:15.371 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:15.371 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 
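create_subsystems (target/shutdown.sh@24-@36) creates the TCP transport and then batches the per-subsystem setup into rpcs.txt before replaying it with a single rpc_cmd; only the cat calls are visible in the trace, so the per-subsystem RPCs below are an assumption inferred from the Malloc1..Malloc10 bdevs and the 10.0.0.2:4420 listener notice above (sizes and serial numbers are placeholders):

rpc_cmd nvmf_create_transport -t tcp -o -u 8192               # shown explicitly at shutdown.sh@20
rm -rf "$testdir/rpcs.txt"                                    # shutdown.sh@26
for i in {1..10}; do                                          # shutdown.sh@27/@28
    cat >> "$testdir/rpcs.txt" <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
rpc_cmd < "$testdir/rpcs.txt"                                 # shutdown.sh@35 replays the batch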
00:17:15.371 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:15.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:15.371 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:17:15.371 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:15.371 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:15.371 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:15.371 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:15.371 { 00:17:15.371 "params": { 00:17:15.371 "name": "Nvme$subsystem", 00:17:15.371 "trtype": "$TEST_TRANSPORT", 00:17:15.371 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:15.371 "adrfam": "ipv4", 00:17:15.371 "trsvcid": "$NVMF_PORT", 00:17:15.371 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:15.371 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:15.371 "hdgst": ${hdgst:-false}, 00:17:15.371 "ddgst": ${ddgst:-false} 00:17:15.371 }, 00:17:15.371 "method": "bdev_nvme_attach_controller" 00:17:15.371 } 00:17:15.371 EOF 00:17:15.371 )") 00:17:15.371 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:17:15.371 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:15.371 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:15.371 { 00:17:15.371 "params": { 00:17:15.371 "name": "Nvme$subsystem", 00:17:15.371 "trtype": "$TEST_TRANSPORT", 00:17:15.371 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:15.371 "adrfam": "ipv4", 00:17:15.371 "trsvcid": "$NVMF_PORT", 00:17:15.371 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:15.371 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:15.371 "hdgst": ${hdgst:-false}, 00:17:15.371 "ddgst": ${ddgst:-false} 00:17:15.371 }, 00:17:15.371 "method": "bdev_nvme_attach_controller" 00:17:15.371 } 00:17:15.371 EOF 00:17:15.371 )") 00:17:15.371 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:17:15.371 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:15.371 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:15.371 { 00:17:15.371 "params": { 00:17:15.371 "name": "Nvme$subsystem", 00:17:15.371 "trtype": "$TEST_TRANSPORT", 00:17:15.371 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:15.371 "adrfam": "ipv4", 00:17:15.371 "trsvcid": "$NVMF_PORT", 00:17:15.371 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:15.371 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:15.371 "hdgst": ${hdgst:-false}, 00:17:15.371 "ddgst": ${ddgst:-false} 00:17:15.371 }, 00:17:15.371 "method": "bdev_nvme_attach_controller" 00:17:15.371 } 00:17:15.371 EOF 00:17:15.371 )") 00:17:15.371 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:17:15.630 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:15.630 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:15.630 { 00:17:15.630 "params": { 
00:17:15.630 "name": "Nvme$subsystem", 00:17:15.630 "trtype": "$TEST_TRANSPORT", 00:17:15.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:15.630 "adrfam": "ipv4", 00:17:15.630 "trsvcid": "$NVMF_PORT", 00:17:15.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:15.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:15.630 "hdgst": ${hdgst:-false}, 00:17:15.630 "ddgst": ${ddgst:-false} 00:17:15.630 }, 00:17:15.630 "method": "bdev_nvme_attach_controller" 00:17:15.630 } 00:17:15.630 EOF 00:17:15.630 )") 00:17:15.630 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:17:15.630 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:15.630 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:15.630 { 00:17:15.630 "params": { 00:17:15.630 "name": "Nvme$subsystem", 00:17:15.630 "trtype": "$TEST_TRANSPORT", 00:17:15.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:15.630 "adrfam": "ipv4", 00:17:15.630 "trsvcid": "$NVMF_PORT", 00:17:15.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:15.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:15.630 "hdgst": ${hdgst:-false}, 00:17:15.630 "ddgst": ${ddgst:-false} 00:17:15.630 }, 00:17:15.630 "method": "bdev_nvme_attach_controller" 00:17:15.630 } 00:17:15.630 EOF 00:17:15.630 )") 00:17:15.630 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:17:15.630 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:15.630 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:15.630 { 00:17:15.630 "params": { 00:17:15.630 "name": "Nvme$subsystem", 00:17:15.630 "trtype": "$TEST_TRANSPORT", 00:17:15.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:15.630 "adrfam": "ipv4", 00:17:15.630 "trsvcid": "$NVMF_PORT", 00:17:15.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:15.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:15.630 "hdgst": ${hdgst:-false}, 00:17:15.630 "ddgst": ${ddgst:-false} 00:17:15.630 }, 00:17:15.630 "method": "bdev_nvme_attach_controller" 00:17:15.630 } 00:17:15.630 EOF 00:17:15.630 )") 00:17:15.630 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:17:15.630 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:15.630 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:15.630 { 00:17:15.630 "params": { 00:17:15.630 "name": "Nvme$subsystem", 00:17:15.630 "trtype": "$TEST_TRANSPORT", 00:17:15.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:15.630 "adrfam": "ipv4", 00:17:15.630 "trsvcid": "$NVMF_PORT", 00:17:15.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:15.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:15.630 "hdgst": ${hdgst:-false}, 00:17:15.630 "ddgst": ${ddgst:-false} 00:17:15.630 }, 00:17:15.630 "method": "bdev_nvme_attach_controller" 00:17:15.630 } 00:17:15.630 EOF 00:17:15.630 )") 00:17:15.630 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:17:15.630 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:15.630 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:15.630 { 00:17:15.630 "params": { 00:17:15.630 "name": 
"Nvme$subsystem", 00:17:15.630 "trtype": "$TEST_TRANSPORT", 00:17:15.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:15.630 "adrfam": "ipv4", 00:17:15.630 "trsvcid": "$NVMF_PORT", 00:17:15.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:15.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:15.630 "hdgst": ${hdgst:-false}, 00:17:15.630 "ddgst": ${ddgst:-false} 00:17:15.630 }, 00:17:15.630 "method": "bdev_nvme_attach_controller" 00:17:15.630 } 00:17:15.630 EOF 00:17:15.630 )") 00:17:15.630 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:17:15.630 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:15.630 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:15.630 { 00:17:15.630 "params": { 00:17:15.630 "name": "Nvme$subsystem", 00:17:15.630 "trtype": "$TEST_TRANSPORT", 00:17:15.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:15.630 "adrfam": "ipv4", 00:17:15.630 "trsvcid": "$NVMF_PORT", 00:17:15.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:15.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:15.630 "hdgst": ${hdgst:-false}, 00:17:15.630 "ddgst": ${ddgst:-false} 00:17:15.630 }, 00:17:15.630 "method": "bdev_nvme_attach_controller" 00:17:15.630 } 00:17:15.630 EOF 00:17:15.630 )") 00:17:15.630 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:17:15.630 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:15.630 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:15.630 { 00:17:15.630 "params": { 00:17:15.630 "name": "Nvme$subsystem", 00:17:15.630 "trtype": "$TEST_TRANSPORT", 00:17:15.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:15.630 "adrfam": "ipv4", 00:17:15.630 "trsvcid": "$NVMF_PORT", 00:17:15.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:15.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:15.630 "hdgst": ${hdgst:-false}, 00:17:15.630 "ddgst": ${ddgst:-false} 00:17:15.630 }, 00:17:15.630 "method": "bdev_nvme_attach_controller" 00:17:15.630 } 00:17:15.630 EOF 00:17:15.630 )") 00:17:15.630 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:17:15.630 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:17:15.631 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:17:15.631 01:04:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:15.631 "params": { 00:17:15.631 "name": "Nvme1", 00:17:15.631 "trtype": "tcp", 00:17:15.631 "traddr": "10.0.0.2", 00:17:15.631 "adrfam": "ipv4", 00:17:15.631 "trsvcid": "4420", 00:17:15.631 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:15.631 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:15.631 "hdgst": false, 00:17:15.631 "ddgst": false 00:17:15.631 }, 00:17:15.631 "method": "bdev_nvme_attach_controller" 00:17:15.631 },{ 00:17:15.631 "params": { 00:17:15.631 "name": "Nvme2", 00:17:15.631 "trtype": "tcp", 00:17:15.631 "traddr": "10.0.0.2", 00:17:15.631 "adrfam": "ipv4", 00:17:15.631 "trsvcid": "4420", 00:17:15.631 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:15.631 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:15.631 "hdgst": false, 00:17:15.631 "ddgst": false 00:17:15.631 }, 00:17:15.631 "method": "bdev_nvme_attach_controller" 00:17:15.631 },{ 00:17:15.631 "params": { 00:17:15.631 "name": "Nvme3", 00:17:15.631 "trtype": "tcp", 00:17:15.631 "traddr": "10.0.0.2", 00:17:15.631 "adrfam": "ipv4", 00:17:15.631 "trsvcid": "4420", 00:17:15.631 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:17:15.631 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:17:15.631 "hdgst": false, 00:17:15.631 "ddgst": false 00:17:15.631 }, 00:17:15.631 "method": "bdev_nvme_attach_controller" 00:17:15.631 },{ 00:17:15.631 "params": { 00:17:15.631 "name": "Nvme4", 00:17:15.631 "trtype": "tcp", 00:17:15.631 "traddr": "10.0.0.2", 00:17:15.631 "adrfam": "ipv4", 00:17:15.631 "trsvcid": "4420", 00:17:15.631 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:17:15.631 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:17:15.631 "hdgst": false, 00:17:15.631 "ddgst": false 00:17:15.631 }, 00:17:15.631 "method": "bdev_nvme_attach_controller" 00:17:15.631 },{ 00:17:15.631 "params": { 00:17:15.631 "name": "Nvme5", 00:17:15.631 "trtype": "tcp", 00:17:15.631 "traddr": "10.0.0.2", 00:17:15.631 "adrfam": "ipv4", 00:17:15.631 "trsvcid": "4420", 00:17:15.631 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:17:15.631 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:17:15.631 "hdgst": false, 00:17:15.631 "ddgst": false 00:17:15.631 }, 00:17:15.631 "method": "bdev_nvme_attach_controller" 00:17:15.631 },{ 00:17:15.631 "params": { 00:17:15.631 "name": "Nvme6", 00:17:15.631 "trtype": "tcp", 00:17:15.631 "traddr": "10.0.0.2", 00:17:15.631 "adrfam": "ipv4", 00:17:15.631 "trsvcid": "4420", 00:17:15.631 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:17:15.631 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:17:15.631 "hdgst": false, 00:17:15.631 "ddgst": false 00:17:15.631 }, 00:17:15.631 "method": "bdev_nvme_attach_controller" 00:17:15.631 },{ 00:17:15.631 "params": { 00:17:15.631 "name": "Nvme7", 00:17:15.631 "trtype": "tcp", 00:17:15.631 "traddr": "10.0.0.2", 00:17:15.631 "adrfam": "ipv4", 00:17:15.631 "trsvcid": "4420", 00:17:15.631 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:17:15.631 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:17:15.631 "hdgst": false, 00:17:15.631 "ddgst": false 00:17:15.631 }, 00:17:15.631 "method": "bdev_nvme_attach_controller" 00:17:15.631 },{ 00:17:15.631 "params": { 00:17:15.631 "name": "Nvme8", 00:17:15.631 "trtype": "tcp", 00:17:15.631 "traddr": "10.0.0.2", 00:17:15.631 "adrfam": "ipv4", 00:17:15.631 "trsvcid": "4420", 00:17:15.631 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:17:15.631 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:17:15.631 "hdgst": false, 
00:17:15.631 "ddgst": false 00:17:15.631 }, 00:17:15.631 "method": "bdev_nvme_attach_controller" 00:17:15.631 },{ 00:17:15.631 "params": { 00:17:15.631 "name": "Nvme9", 00:17:15.631 "trtype": "tcp", 00:17:15.631 "traddr": "10.0.0.2", 00:17:15.631 "adrfam": "ipv4", 00:17:15.631 "trsvcid": "4420", 00:17:15.631 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:17:15.631 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:17:15.631 "hdgst": false, 00:17:15.631 "ddgst": false 00:17:15.631 }, 00:17:15.631 "method": "bdev_nvme_attach_controller" 00:17:15.631 },{ 00:17:15.631 "params": { 00:17:15.631 "name": "Nvme10", 00:17:15.631 "trtype": "tcp", 00:17:15.631 "traddr": "10.0.0.2", 00:17:15.631 "adrfam": "ipv4", 00:17:15.631 "trsvcid": "4420", 00:17:15.631 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:17:15.631 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:17:15.631 "hdgst": false, 00:17:15.631 "ddgst": false 00:17:15.631 }, 00:17:15.631 "method": "bdev_nvme_attach_controller" 00:17:15.631 }' 00:17:15.631 [2024-05-15 01:04:27.795435] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:17:15.631 [2024-05-15 01:04:27.795522] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1280610 ] 00:17:15.631 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.631 [2024-05-15 01:04:27.873557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.631 [2024-05-15 01:04:27.984851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.529 Running I/O for 10 seconds... 00:17:18.463 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:18.463 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:17:18.463 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:18.464 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.464 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:18.464 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.464 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:18.464 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:17:18.464 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:18.464 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:17:18.464 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:17:18.464 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:17:18.464 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:17:18.464 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:18.464 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 
00:17:18.464 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:18.464 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.464 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:18.464 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.464 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=75 00:17:18.464 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 75 -ge 100 ']' 00:17:18.464 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:17:18.464 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:17:18.464 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:18.464 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:18.464 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:18.464 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.464 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:18.464 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.737 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=138 00:17:18.737 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 138 -ge 100 ']' 00:17:18.737 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:17:18.737 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:17:18.737 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:17:18.737 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1280424 00:17:18.737 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 1280424 ']' 00:17:18.737 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 1280424 00:17:18.737 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:17:18.737 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:18.737 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1280424 00:17:18.737 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:18.737 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:18.737 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1280424' 00:17:18.737 killing process with pid 1280424 00:17:18.737 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 1280424 00:17:18.737 [2024-05-15 01:04:30.892595] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport 
is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:18.737 01:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 1280424 00:17:18.737 [2024-05-15 01:04:30.894376] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff930 is same with the state(5) to be set 00:17:18.737 [2024-05-15 01:04:30.895429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.737 [2024-05-15 01:04:30.895473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-05-15 01:04:30.895491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.737 [2024-05-15 01:04:30.895507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-05-15 01:04:30.895522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.737 [2024-05-15 01:04:30.895537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-05-15 01:04:30.895552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.737 [2024-05-15 01:04:30.895567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-05-15 01:04:30.895582] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae47c0 is same with the state(5) to be set 00:17:18.737 [2024-05-15 01:04:30.895715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-05-15 01:04:30.895738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-05-15 01:04:30.895763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-05-15 01:04:30.895779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-05-15 01:04:30.895796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.737 [2024-05-15 01:04:30.895811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.737 [2024-05-15 01:04:30.895828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.895863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.895879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.895894] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.895910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.895925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.895951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.895966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.895989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.896004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.896020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.896035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.896051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.896065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.896081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.896095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.896111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.896125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.896141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.896155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.896171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.896187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.896203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.896217] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.896233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.896256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.896275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.896291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.896307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.896321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.896337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.896352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.896368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.896382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.896398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.896412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.896427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.896442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.896458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.896473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.896488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.896503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.896519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.896534] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.896550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.896564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.896580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.896594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.896610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.896625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.896640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.896658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.896675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.896689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.896705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.896720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.896736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.896750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.896766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.896781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.896797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.896812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.896827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.896843] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.896858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.896874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.896890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.896904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.896919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.896942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.896959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.896973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.896999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.897014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.897029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.897044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.897063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.897078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.897094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.738 [2024-05-15 01:04:30.897108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.738 [2024-05-15 01:04:30.897123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.739 [2024-05-15 01:04:30.897138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-05-15 01:04:30.897153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.739 [2024-05-15 01:04:30.897167] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-05-15 01:04:30.897183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.739 [2024-05-15 01:04:30.897208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-05-15 01:04:30.897224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.739 [2024-05-15 01:04:30.897238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-05-15 01:04:30.897254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.739 [2024-05-15 01:04:30.897274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-05-15 01:04:30.897290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.739 [2024-05-15 01:04:30.897304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-05-15 01:04:30.897319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.739 [2024-05-15 01:04:30.897333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-05-15 01:04:30.897349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.739 [2024-05-15 01:04:30.897372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-05-15 01:04:30.897388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.739 [2024-05-15 01:04:30.897404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-05-15 01:04:30.897419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.739 [2024-05-15 01:04:30.897434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-05-15 01:04:30.897450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.739 [2024-05-15 01:04:30.897468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-05-15 01:04:30.897484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.739 [2024-05-15 01:04:30.897500] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-05-15 01:04:30.897515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.739 [2024-05-15 01:04:30.897540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-05-15 01:04:30.897556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.739 [2024-05-15 01:04:30.897571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-05-15 01:04:30.897587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.739 [2024-05-15 01:04:30.897611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-05-15 01:04:30.897626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.739 [2024-05-15 01:04:30.897640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-05-15 01:04:30.897656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.739 [2024-05-15 01:04:30.897670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-05-15 01:04:30.897685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.739 [2024-05-15 01:04:30.897699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-05-15 01:04:30.897715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.739 [2024-05-15 01:04:30.897730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-05-15 01:04:30.897746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.739 [2024-05-15 01:04:30.897768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-05-15 01:04:30.897783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.739 [2024-05-15 01:04:30.897797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.739 [2024-05-15 01:04:30.897883] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d2dc90 was disconnected and freed. reset controller. 
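For reference, the pair printed as "(00/08)" in the completion dump above is the NVMe Status Code Type followed by the Status Code: SCT 0x0 (generic command status) with SC 0x08, "Command Aborted due to SQ Deletion", which is what the host driver reports for every command still outstanding on a submission queue that the reset tears down. The small standalone sketch below is only an illustration of that decoding (the hard-coded status value and the phase-bit-stripped field layout are assumptions for the demo, not code taken from the test):

/* Illustrative sketch: decode a 15-bit completion status field (phase bit
 * already stripped) into the SCT/SC/DNR values that spdk_nvme_print_completion
 * prints above.  0x0008 corresponds to "ABORTED - SQ DELETION (00/08) ... dnr:0". */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t status = 0x0008;              /* assumed sample value              */
    unsigned sc  = status & 0xff;          /* Status Code          -> 0x08      */
    unsigned sct = (status >> 8) & 0x7;    /* Status Code Type     -> 0x00      */
    unsigned dnr = (status >> 14) & 0x1;   /* Do Not Retry         -> 0         */

    printf("(%02x/%02x) dnr:%u\n", sct, sc, dnr);   /* prints "(00/08) dnr:0" */
    return 0;
}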
00:17:18.739 [2024-05-15 01:04:30.898218] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898266] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898282] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898302] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898316] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898329] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898342] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898355] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898368] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898380] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898392] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898404] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898417] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898429] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898442] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898455] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898467] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898480] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898492] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898503] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898516] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898529] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is 
same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898542] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898556] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898568] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898581] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898593] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898617] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898630] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898643] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898655] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898672] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898685] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898700] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898713] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898725] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898738] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898750] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898763] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898775] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898808] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.739 [2024-05-15 01:04:30.898824] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.740 [2024-05-15 01:04:30.898836] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.740 [2024-05-15 01:04:30.898849] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.740 [2024-05-15 01:04:30.898862] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.740 [2024-05-15 01:04:30.898876] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.740 [2024-05-15 01:04:30.898888] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.740 [2024-05-15 01:04:30.898900] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.740 [2024-05-15 01:04:30.898912] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.740 [2024-05-15 01:04:30.898925] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.740 [2024-05-15 01:04:30.898945] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.740 [2024-05-15 01:04:30.898959] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.740 [2024-05-15 01:04:30.898972] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.740 [2024-05-15 01:04:30.898995] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.740 [2024-05-15 01:04:30.899007] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.740 [2024-05-15 01:04:30.899019] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.740 [2024-05-15 01:04:30.899033] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.740 [2024-05-15 01:04:30.899044] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.740 [2024-05-15 01:04:30.899061] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.740 [2024-05-15 01:04:30.899073] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.740 [2024-05-15 01:04:30.899086] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.740 [2024-05-15 01:04:30.899099] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.740 [2024-05-15 01:04:30.899111] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8022d0 is same with the state(5) to be set 00:17:18.740 [2024-05-15 01:04:30.900341] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:18.740 [2024-05-15 01:04:30.900386] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae47c0 (9): Bad file descriptor 00:17:18.740 [2024-05-15 01:04:30.901958] posix.c:1037:posix_sock_create: *ERROR*: 
connect() failed, errno = 111 00:17:18.740 [2024-05-15 01:04:30.902168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.740 [2024-05-15 01:04:30.902195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae47c0 with addr=10.0.0.2, port=4420 00:17:18.740 [2024-05-15 01:04:30.902212] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae47c0 is same with the state(5) to be set 00:17:18.740 [2024-05-15 01:04:30.902737] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae47c0 (9): Bad file descriptor 00:17:18.740 [2024-05-15 01:04:30.903278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.740 [2024-05-15 01:04:30.903304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.740 [2024-05-15 01:04:30.903327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.740 [2024-05-15 01:04:30.903344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.740 [2024-05-15 01:04:30.903362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.740 [2024-05-15 01:04:30.903377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.740 [2024-05-15 01:04:30.903393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.740 [2024-05-15 01:04:30.903413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.740 [2024-05-15 01:04:30.903429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.740 [2024-05-15 01:04:30.903444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.740 [2024-05-15 01:04:30.903460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.740 [2024-05-15 01:04:30.903476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.740 [2024-05-15 01:04:30.903493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.740 [2024-05-15 01:04:30.903508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.740 [2024-05-15 01:04:30.903525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.740 [2024-05-15 01:04:30.903547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.740 [2024-05-15 01:04:30.903565] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.740 [2024-05-15 01:04:30.903580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.740 [2024-05-15 01:04:30.903597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.740 [2024-05-15 01:04:30.903612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.740 [2024-05-15 01:04:30.903629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.740 [2024-05-15 01:04:30.903646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.740 [2024-05-15 01:04:30.903663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.740 [2024-05-15 01:04:30.903678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.740 [2024-05-15 01:04:30.903695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.740 [2024-05-15 01:04:30.903711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.740 [2024-05-15 01:04:30.903729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.740 [2024-05-15 01:04:30.903743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.740 [2024-05-15 01:04:30.903759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.740 [2024-05-15 01:04:30.903774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.740 [2024-05-15 01:04:30.903790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.740 [2024-05-15 01:04:30.903805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.740 [2024-05-15 01:04:30.903821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.740 [2024-05-15 01:04:30.903836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.740 [2024-05-15 01:04:30.903852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.740 [2024-05-15 01:04:30.903866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.740 [2024-05-15 01:04:30.903882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.740 [2024-05-15 01:04:30.903896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.740 [2024-05-15 01:04:30.903912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.740 [2024-05-15 01:04:30.903925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.740 [2024-05-15 01:04:30.903968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.740 [2024-05-15 01:04:30.903984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.740 [2024-05-15 01:04:30.904001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.740 [2024-05-15 01:04:30.904016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.740 [2024-05-15 01:04:30.904032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.740 [2024-05-15 01:04:30.904046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.740 [2024-05-15 01:04:30.904061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.740 [2024-05-15 01:04:30.904076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.740 [2024-05-15 01:04:30.904091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.740 [2024-05-15 01:04:30.904105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.740 [2024-05-15 01:04:30.904121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.740 [2024-05-15 01:04:30.904135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.740 [2024-05-15 01:04:30.904151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.740 [2024-05-15 01:04:30.904165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.740 [2024-05-15 01:04:30.904181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.740 [2024-05-15 01:04:30.904195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.904210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.741 [2024-05-15 01:04:30.904225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.904240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.741 [2024-05-15 01:04:30.904266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.904282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.741 [2024-05-15 01:04:30.904311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.904328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.741 [2024-05-15 01:04:30.904342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.904357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.741 [2024-05-15 01:04:30.904375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.904391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.741 [2024-05-15 01:04:30.904406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.904421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.741 [2024-05-15 01:04:30.904435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.904452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.741 [2024-05-15 01:04:30.904466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.904481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.741 [2024-05-15 01:04:30.904495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.904511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.741 [2024-05-15 01:04:30.904525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.904540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:18.741 [2024-05-15 01:04:30.904553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.904569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.741 [2024-05-15 01:04:30.904582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.904598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.741 [2024-05-15 01:04:30.904612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.904628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.741 [2024-05-15 01:04:30.904641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.904657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.741 [2024-05-15 01:04:30.904671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.904686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.741 [2024-05-15 01:04:30.904700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.904715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.741 [2024-05-15 01:04:30.904729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.904751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.741 [2024-05-15 01:04:30.904765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.904781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.741 [2024-05-15 01:04:30.904795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.904811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.741 [2024-05-15 01:04:30.904824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.904840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:18.741 [2024-05-15 01:04:30.904855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.904871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.741 [2024-05-15 01:04:30.904884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.904900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.741 [2024-05-15 01:04:30.904936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.904956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.741 [2024-05-15 01:04:30.904970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.904986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.741 [2024-05-15 01:04:30.905000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.905016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.741 [2024-05-15 01:04:30.905030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.905045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.741 [2024-05-15 01:04:30.905059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.905076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.741 [2024-05-15 01:04:30.905091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.905106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.741 [2024-05-15 01:04:30.905120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.905136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.741 [2024-05-15 01:04:30.905153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.905171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.741 [2024-05-15 
01:04:30.905185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.905201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.741 [2024-05-15 01:04:30.905215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.905258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.741 [2024-05-15 01:04:30.905272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.905288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.741 [2024-05-15 01:04:30.905301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.905316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.741 [2024-05-15 01:04:30.905330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.905345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.741 [2024-05-15 01:04:30.905359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.741 [2024-05-15 01:04:30.905373] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d27190 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.905975] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d27190 was disconnected and freed. reset controller. 00:17:18.742 [2024-05-15 01:04:30.906025] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:18.742 [2024-05-15 01:04:30.906042] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:18.742 [2024-05-15 01:04:30.906058] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
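The "connect() failed, errno = 111" entries above are ECONNREFUSED on Linux: the reconnect to 10.0.0.2:4420 found nothing accepting connections at that moment, so the controller reset attempt fails and cnode1 is left in the failed state reported here. A minimal standalone sketch (an illustration only; the loopback address and the assumption that nothing listens on that port are not part of the test) reproduces the same errno:

/* Illustrative sketch: connecting to a TCP port with no listener yields
 * errno 111 (ECONNREFUSED), the same value posix_sock_create reports above. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),                 /* NVMe/TCP default port */
    };
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* assumed closed port */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}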
00:17:18.742 [2024-05-15 01:04:30.906148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.742 [2024-05-15 01:04:30.906171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.742 [2024-05-15 01:04:30.906186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.742 [2024-05-15 01:04:30.906200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.742 [2024-05-15 01:04:30.906202] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.742 [2024-05-15 01:04:30.906231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.742 [2024-05-15 01:04:30.906231] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.742 [2024-05-15 01:04:30.906260] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.742 [2024-05-15 01:04:30.906276] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906292] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33d30 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906292] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906308] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906320] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906333] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906344] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906357] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906369] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906382] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906399] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906413] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906425] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906438] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906451] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906463] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906476] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906489] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906501] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906514] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906529] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906542] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906554] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906567] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906584] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906598] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906610] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906623] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906636] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906648] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906661] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906674] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 
00:17:18.742 [2024-05-15 01:04:30.906688] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906701] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906713] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906726] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906739] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906752] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906795] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906809] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906821] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906834] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906847] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906861] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906875] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906888] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906901] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906913] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906926] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906948] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906961] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.906981] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.907006] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.907018] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is 
same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.907031] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.907043] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.907055] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.907068] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.907080] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.907093] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.907105] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd0 is same with the state(5) to be set 00:17:18.742 [2024-05-15 01:04:30.907919] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:18.742 [2024-05-15 01:04:30.907953] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:17:18.742 [2024-05-15 01:04:30.907978] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b33d30 (9): Bad file descriptor 00:17:18.743 [2024-05-15 01:04:30.908788] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.908823] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.908838] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.908850] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.908862] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.908874] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.908887] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.908899] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.908911] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.908923] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.908952] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.908966] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) 
to be set 00:17:18.743 [2024-05-15 01:04:30.908978] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.908998] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909010] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909028] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909041] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909053] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909065] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909077] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909088] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909101] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909113] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909125] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909136] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909148] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909160] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909172] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909184] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909196] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909207] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909219] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909231] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909248] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909260] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909272] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909284] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909295] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909307] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909319] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909331] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909342] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909358] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909370] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909382] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909394] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909406] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909417] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909429] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909442] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909454] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909466] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909478] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909490] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909502] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.909514] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800270 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.910830] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.910862] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.910877] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.910889] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.910902] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.910914] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.910926] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.910950] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.910963] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.910980] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.911002] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.911017] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.911029] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.911041] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.911063] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.911077] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.911090] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.911106] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.911118] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.911130] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.911146] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 
00:17:18.743 [2024-05-15 01:04:30.911158] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.911170] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.911182] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.911196] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.911209] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.911221] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.911234] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.911262] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.911274] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.911289] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.743 [2024-05-15 01:04:30.911303] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.911316] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.911329] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.911342] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.911354] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.911367] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.911379] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.911391] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.911403] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.911416] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.911450] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.911467] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is 
same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.911480] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.911493] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.911507] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.911520] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.911532] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.911545] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.911557] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.911569] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.911582] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.911595] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.911607] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.911619] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.911631] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.911643] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.911656] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.911668] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.911680] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.911695] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.911708] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.911720] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800710 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.911896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.744 [2024-05-15 01:04:30.912145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.744 [2024-05-15 
01:04:30.912173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b33d30 with addr=10.0.0.2, port=4420 00:17:18.744 [2024-05-15 01:04:30.912189] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33d30 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.912307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.744 [2024-05-15 01:04:30.912331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.744 [2024-05-15 01:04:30.912366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.744 [2024-05-15 01:04:30.912383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.744 [2024-05-15 01:04:30.912400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.744 [2024-05-15 01:04:30.912415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.744 [2024-05-15 01:04:30.912431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.744 [2024-05-15 01:04:30.912446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.744 [2024-05-15 01:04:30.912462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.744 [2024-05-15 01:04:30.912477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.744 [2024-05-15 01:04:30.912492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.744 [2024-05-15 01:04:30.912507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.744 [2024-05-15 01:04:30.912523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.744 [2024-05-15 01:04:30.912538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.744 [2024-05-15 01:04:30.912554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.744 [2024-05-15 01:04:30.912569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.744 [2024-05-15 01:04:30.912585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.744 [2024-05-15 01:04:30.912600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.744 [2024-05-15 01:04:30.912616] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.744 [2024-05-15 01:04:30.912631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.744 [2024-05-15 01:04:30.912646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.744 [2024-05-15 01:04:30.912661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.744 [2024-05-15 01:04:30.912676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.744 [2024-05-15 01:04:30.912692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.744 [2024-05-15 01:04:30.912782] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ad3e80 was disconnected and freed. reset controller. 00:17:18.744 [2024-05-15 01:04:30.913185] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.913213] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.913233] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.913229] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c31300 was disconnected and freed. reset controller. 
00:17:18.744 [2024-05-15 01:04:30.913252] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.913266] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.913278] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.913291] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.913310] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.913322] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.913335] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.913347] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.913359] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.913371] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.913382] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.913395] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.913408] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.913420] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.913432] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.913444] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.913456] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.913498] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.913512] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.913524] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.744 [2024-05-15 01:04:30.913523] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b33d30 (9): Bad file descriptor 00:17:18.744 [2024-05-15 01:04:30.913538] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is 
same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.913552] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.913564] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.913576] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.913592] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.913606] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.913619] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.913635] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.913647] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.913659] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.913672] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.913684] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.913696] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.913709] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.913720] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.913732] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.913752] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.913765] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.913777] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.913789] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.913802] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.913815] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.913827] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.913839] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.913850] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.913862] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.913874] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.913887] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.913899] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.913912] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.913923] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.913947] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.913965] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.913978] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.914000] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.914012] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.914024] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.914036] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.914047] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.914059] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801050 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.914859] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:18.745 [2024-05-15 01:04:30.914888] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:17:18.745 [2024-05-15 01:04:30.914942] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:17:18.745 [2024-05-15 01:04:30.915003] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x163f730 (9): Bad file descriptor 00:17:18.745 [2024-05-15 01:04:30.915028] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x1b0fd10 (9): Bad file descriptor 00:17:18.745 [2024-05-15 01:04:30.915054] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:17:18.745 [2024-05-15 01:04:30.915070] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:17:18.745 [2024-05-15 01:04:30.915084] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:17:18.745 [2024-05-15 01:04:30.915147] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:17:18.745 [2024-05-15 01:04:30.915478] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:18.745 [2024-05-15 01:04:30.915522] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.915549] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.915562] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.915574] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.915587] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.915598] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.915611] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.915623] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.915635] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.915648] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.915665] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.915678] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.915690] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.915703] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.915714] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.915727] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.915739] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.915751] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.915763] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.915775] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.915789] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.915802] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.915814] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.915826] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.915838] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.915850] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.915862] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.915875] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.915887] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.915899] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.915911] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.915923] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.915945] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.915943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.745 [2024-05-15 01:04:30.915958] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.915971] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.915988] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.916000] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.916016] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.745 [2024-05-15 01:04:30.916028] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.916040] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.916058] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.916071] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.916083] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.916095] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.916108] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.916119] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.916120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.746 [2024-05-15 01:04:30.916133] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.916146] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.916145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae47c0 with addr=10.0.0.2, port=4420 00:17:18.746 [2024-05-15 01:04:30.916160] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.916163] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae47c0 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.916173] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.916186] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.916198] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.916210] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.916222] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.916245] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.916257] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.916269] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.916281]
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.916293] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.916310] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.916322] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.916341] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.916354] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8014f0 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.746 [2024-05-15 01:04:30.917316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.746 [2024-05-15 01:04:30.917342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b0fd10 with addr=10.0.0.2, port=4420 00:17:18.746 [2024-05-15 01:04:30.917358] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0fd10 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917408] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917434] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917448] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917460] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917472] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917485] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917498] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917510] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917522] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917534] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.746 [2024-05-15 01:04:30.917547] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917560] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the 
state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917572] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917584] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917596] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917609] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917622] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917635] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917647] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917659] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917671] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917684] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.746 [2024-05-15 01:04:30.917702] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x163f730 with addr=10.0.0.2, port=4420 00:17:18.746 [2024-05-15 01:04:30.917716] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917727] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x163f730 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917729] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917743] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917748] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae47c0 (9): Bad file descriptor 00:17:18.746 [2024-05-15 01:04:30.917756] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917769] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917782] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.746 [2024-05-15 
01:04:30.917794] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917810] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.746 [2024-05-15 01:04:30.917823] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.746 [2024-05-15 01:04:30.917836] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.746 [2024-05-15 01:04:30.917842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.746 [2024-05-15 01:04:30.917850] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.747 [2024-05-15 01:04:30.917857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.747 [2024-05-15 01:04:30.917863] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.747 [2024-05-15 01:04:30.917871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.747 [2024-05-15 01:04:30.917875] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.747 [2024-05-15 01:04:30.917886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.747 [2024-05-15 01:04:30.917888] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.747 [2024-05-15 01:04:30.917902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.747 [2024-05-15 01:04:30.917902] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.747 [2024-05-15 01:04:30.917923] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.747 [2024-05-15 01:04:30.917923] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c87fb0 is same with the state(5) to be set 00:17:18.747 [2024-05-15 01:04:30.917946] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.747 [2024-05-15 01:04:30.917960] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.747 [2024-05-15 01:04:30.917973] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.747 [2024-05-15
01:04:30.917983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.747 [2024-05-15 01:04:30.917994] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.747 [2024-05-15 01:04:30.918004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.747 [2024-05-15 01:04:30.918007] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.747 [2024-05-15 01:04:30.918020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.747 [2024-05-15 01:04:30.918025] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.747 [2024-05-15 01:04:30.918034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.747 [2024-05-15 01:04:30.918038] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.747 [2024-05-15 01:04:30.918049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.747 [2024-05-15 01:04:30.918051] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.747 [2024-05-15 01:04:30.918065] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.747 [2024-05-15 01:04:30.918065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.747 [2024-05-15 01:04:30.918079] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.747 [2024-05-15 01:04:30.918082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.747 [2024-05-15 01:04:30.918092] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.747 [2024-05-15 01:04:30.918096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.747 [2024-05-15 01:04:30.918105] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.747 [2024-05-15 01:04:30.918110] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b086b0 is same with the state(5) to be set 00:17:18.747 [2024-05-15 01:04:30.918118] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.747 [2024-05-15 01:04:30.918130] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.747 [2024-05-15 01:04:30.918146] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.747 [2024-05-15 01:04:30.918159] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.747 [2024-05-15 01:04:30.918155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.747 [2024-05-15 01:04:30.918174] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.747 [2024-05-15 01:04:30.918177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.747 [2024-05-15 01:04:30.918186] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.747 [2024-05-15 01:04:30.918192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.747 [2024-05-15 01:04:30.918199] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.747 [2024-05-15 01:04:30.918206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.747 [2024-05-15 01:04:30.918212] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.747 [2024-05-15 01:04:30.918221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.747 [2024-05-15 01:04:30.918225] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.747 [2024-05-15 01:04:30.918235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.747 [2024-05-15 01:04:30.918238] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801990 is same with the state(5) to be set 00:17:18.747 [2024-05-15 01:04:30.918258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.747 [2024-05-15 01:04:30.918272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.747 [2024-05-15 01:04:30.918285] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89960 is same with the state(5) to be set 00:17:18.747 [2024-05-15 01:04:30.918328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.747 [2024-05-15 01:04:30.918349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.747 [2024-05-15 01:04:30.918365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.747 [2024-05-15 01:04:30.918379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.747 [2024-05-15 01:04:30.918393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.747 [2024-05-15
01:04:30.918407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.747 [2024-05-15 01:04:30.918422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.747 [2024-05-15 01:04:30.918436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.747 [2024-05-15 01:04:30.918449] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1caed00 is same with the state(5) to be set 00:17:18.747 [2024-05-15 01:04:30.918500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.747 [2024-05-15 01:04:30.918521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.747 [2024-05-15 01:04:30.918537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.747 [2024-05-15 01:04:30.918551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.747 [2024-05-15 01:04:30.918566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.747 [2024-05-15 01:04:30.918580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.747 [2024-05-15 01:04:30.918594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:18.747 [2024-05-15 01:04:30.918607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.747 [2024-05-15 01:04:30.918621] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9f230 is same with the state(5) to be set 00:17:18.747 [2024-05-15 01:04:30.918727] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:17:18.747 [2024-05-15 01:04:30.919010] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0fd10 (9): Bad file descriptor 00:17:18.747 [2024-05-15 01:04:30.919038] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x163f730 (9): Bad file descriptor 00:17:18.747 [2024-05-15 01:04:30.919055] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:18.747 [2024-05-15 01:04:30.919068] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:18.747 [2024-05-15 01:04:30.919082] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:18.747 [2024-05-15 01:04:30.919086] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x801e30 is same with the state(5) to be set 00:17:18.747 [2024-05-15 01:04:30.919189] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:17:18.747 [2024-05-15 01:04:30.920103] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:18.747 [2024-05-15 01:04:30.920128] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:17:18.747 [2024-05-15 01:04:30.920142] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:17:18.747 [2024-05-15 01:04:30.920155] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:17:18.747 [2024-05-15 01:04:30.920174] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:17:18.747 [2024-05-15 01:04:30.920188] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:17:18.747 [2024-05-15 01:04:30.920201] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:17:18.747 [2024-05-15 01:04:30.920430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.747 [2024-05-15 01:04:30.920455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.747 [2024-05-15 01:04:30.920476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.748 [2024-05-15 01:04:30.920492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.748 [2024-05-15 01:04:30.920514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.748 [2024-05-15 01:04:30.920530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.748 [2024-05-15 01:04:30.920547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.748 [2024-05-15 01:04:30.920562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.748 [2024-05-15 01:04:30.920578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.748 [2024-05-15 01:04:30.920593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.748 [2024-05-15 01:04:30.920609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.748 [2024-05-15 01:04:30.920623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.748 [2024-05-15 01:04:30.920639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.748 [2024-05-15 01:04:30.920655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.748 [2024-05-15 01:04:30.920671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:18.748 [2024-05-15 01:04:30.920685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.748 [2024-05-15 01:04:30.920701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.748 [2024-05-15 01:04:30.920716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.748 [2024-05-15 01:04:30.920732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.748 [2024-05-15 01:04:30.920746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.748 [2024-05-15 01:04:30.920762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.748 [2024-05-15 01:04:30.920777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.748 [2024-05-15 01:04:30.920792] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d25ca0 is same with the state(5) to be set 00:17:18.748 [2024-05-15 01:04:30.920870] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d25ca0 was disconnected and freed. reset controller. 00:17:18.748 [2024-05-15 01:04:30.920893] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:18.748 [2024-05-15 01:04:30.920908] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:18.748 [2024-05-15 01:04:30.921952] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:17:18.748 [2024-05-15 01:04:30.921990] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:17:18.748 [2024-05-15 01:04:30.922039] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b2ef20 (9): Bad file descriptor 00:17:18.748 [2024-05-15 01:04:30.922132] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:17:18.748 [2024-05-15 01:04:30.922200] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:17:18.748 [2024-05-15 01:04:30.922409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.748 [2024-05-15 01:04:30.922808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.748 [2024-05-15 01:04:30.922834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b33d30 with addr=10.0.0.2, port=4420 00:17:18.748 [2024-05-15 01:04:30.922850] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33d30 is same with the state(5) to be set 00:17:18.748 [2024-05-15 01:04:30.923384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.748 [2024-05-15 01:04:30.923744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.748 [2024-05-15 01:04:30.923770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2ef20 with addr=10.0.0.2, port=4420 00:17:18.748 [2024-05-15 01:04:30.923785] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2ef20 is same with the state(5) to be set 00:17:18.748 [2024-05-15 01:04:30.923804] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b33d30 (9): Bad file descriptor 00:17:18.748 [2024-05-15 01:04:30.923881] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b2ef20 (9): Bad file descriptor 00:17:18.748 [2024-05-15 01:04:30.923906] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:17:18.748 [2024-05-15 01:04:30.923920] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:17:18.748 [2024-05-15 01:04:30.923941] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:17:18.748 [2024-05-15 01:04:30.924002] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:18.748 [2024-05-15 01:04:30.924022] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:17:18.748 [2024-05-15 01:04:30.924035] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:17:18.748 [2024-05-15 01:04:30.924048] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:17:18.748 [2024-05-15 01:04:30.924095] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:18.748 [2024-05-15 01:04:30.925160] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:18.748 [2024-05-15 01:04:30.925378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.748 [2024-05-15 01:04:30.925534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.748 [2024-05-15 01:04:30.925559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae47c0 with addr=10.0.0.2, port=4420 00:17:18.748 [2024-05-15 01:04:30.925575] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae47c0 is same with the state(5) to be set 00:17:18.748 [2024-05-15 01:04:30.925624] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae47c0 (9): Bad file descriptor 00:17:18.748 [2024-05-15 01:04:30.925673] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:18.748 [2024-05-15 01:04:30.925690] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:18.748 [2024-05-15 01:04:30.925704] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:18.748 [2024-05-15 01:04:30.925752] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:18.748 [2024-05-15 01:04:30.926287] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:17:18.748 [2024-05-15 01:04:30.926353] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:17:18.748 [2024-05-15 01:04:30.926533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.748 [2024-05-15 01:04:30.926691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.748 [2024-05-15 01:04:30.926721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x163f730 with addr=10.0.0.2, port=4420 00:17:18.748 [2024-05-15 01:04:30.926738] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x163f730 is same with the state(5) to be set 00:17:18.748 [2024-05-15 01:04:30.926928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.748 [2024-05-15 01:04:30.927098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.748 [2024-05-15 01:04:30.927123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b0fd10 with addr=10.0.0.2, port=4420 00:17:18.748 [2024-05-15 01:04:30.927138] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0fd10 is same with the state(5) to be set 00:17:18.748 [2024-05-15 01:04:30.927156] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x163f730 (9): Bad file descriptor 00:17:18.748 [2024-05-15 01:04:30.927178] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c87fb0 (9): Bad file descriptor 00:17:18.748 [2024-05-15 01:04:30.927210] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b086b0 (9): Bad file descriptor 00:17:18.748 [2024-05-15 01:04:30.927241] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c89960 (9): Bad file descriptor 00:17:18.748 [2024-05-15 01:04:30.927271] 
nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1caed00 (9): Bad file descriptor 00:17:18.748 [2024-05-15 01:04:30.927302] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c9f230 (9): Bad file descriptor 00:17:18.748 [2024-05-15 01:04:30.927385] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0fd10 (9): Bad file descriptor 00:17:18.748 [2024-05-15 01:04:30.927407] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:17:18.748 [2024-05-15 01:04:30.927422] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:17:18.748 [2024-05-15 01:04:30.927436] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:17:18.748 [2024-05-15 01:04:30.927499] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:18.748 [2024-05-15 01:04:30.927519] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:17:18.748 [2024-05-15 01:04:30.927532] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:17:18.748 [2024-05-15 01:04:30.927546] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:17:18.748 [2024-05-15 01:04:30.927593] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:18.748 [2024-05-15 01:04:30.932185] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:17:18.748 [2024-05-15 01:04:30.932459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.748 [2024-05-15 01:04:30.932627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.748 [2024-05-15 01:04:30.932652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b33d30 with addr=10.0.0.2, port=4420 00:17:18.748 [2024-05-15 01:04:30.932668] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33d30 is same with the state(5) to be set 00:17:18.748 [2024-05-15 01:04:30.932721] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b33d30 (9): Bad file descriptor 00:17:18.748 [2024-05-15 01:04:30.932772] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:17:18.748 [2024-05-15 01:04:30.932789] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:17:18.748 [2024-05-15 01:04:30.932804] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:17:18.748 [2024-05-15 01:04:30.932861] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:18.748 [2024-05-15 01:04:30.932984] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:17:18.748 [2024-05-15 01:04:30.933188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.748 [2024-05-15 01:04:30.933362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.749 [2024-05-15 01:04:30.933387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2ef20 with addr=10.0.0.2, port=4420 00:17:18.749 [2024-05-15 01:04:30.933402] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2ef20 is same with the state(5) to be set 00:17:18.749 [2024-05-15 01:04:30.933452] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b2ef20 (9): Bad file descriptor 00:17:18.749 [2024-05-15 01:04:30.933502] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:17:18.749 [2024-05-15 01:04:30.933519] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:17:18.749 [2024-05-15 01:04:30.933533] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:17:18.749 [2024-05-15 01:04:30.933582] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:18.749 [2024-05-15 01:04:30.935304] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:18.749 [2024-05-15 01:04:30.935513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.749 [2024-05-15 01:04:30.935675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.749 [2024-05-15 01:04:30.935700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae47c0 with addr=10.0.0.2, port=4420 00:17:18.749 [2024-05-15 01:04:30.935716] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae47c0 is same with the state(5) to be set 00:17:18.749 [2024-05-15 01:04:30.935766] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae47c0 (9): Bad file descriptor 00:17:18.749 [2024-05-15 01:04:30.935832] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:18.749 [2024-05-15 01:04:30.935852] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:18.749 [2024-05-15 01:04:30.935865] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:18.749 [2024-05-15 01:04:30.935913] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:18.749 [2024-05-15 01:04:30.936425] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:17:18.749 [2024-05-15 01:04:30.936643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.749 [2024-05-15 01:04:30.936803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.749 [2024-05-15 01:04:30.936829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x163f730 with addr=10.0.0.2, port=4420 00:17:18.749 [2024-05-15 01:04:30.936845] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x163f730 is same with the state(5) to be set 00:17:18.749 [2024-05-15 01:04:30.936892] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:17:18.749 [2024-05-15 01:04:30.936925] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x163f730 (9): Bad file descriptor 00:17:18.749 [2024-05-15 01:04:30.937187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.749 [2024-05-15 01:04:30.937360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.749 [2024-05-15 01:04:30.937385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b0fd10 with addr=10.0.0.2, port=4420 00:17:18.749 [2024-05-15 01:04:30.937406] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0fd10 is same with the state(5) to be set 00:17:18.749 [2024-05-15 01:04:30.937421] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:17:18.749 [2024-05-15 01:04:30.937434] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:17:18.749 [2024-05-15 01:04:30.937446] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:17:18.749 [2024-05-15 01:04:30.937521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.749 [2024-05-15 01:04:30.937543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.749 [2024-05-15 01:04:30.937570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.749 [2024-05-15 01:04:30.937587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.749 [2024-05-15 01:04:30.937605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.749 [2024-05-15 01:04:30.937620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.749 [2024-05-15 01:04:30.937636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.749 [2024-05-15 01:04:30.937651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.749 [2024-05-15 01:04:30.937667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.749 [2024-05-15 01:04:30.937682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.749 [2024-05-15 01:04:30.937699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.749 [2024-05-15 01:04:30.937713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.749 [2024-05-15 01:04:30.937729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.749 [2024-05-15 01:04:30.937744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.749 [2024-05-15 01:04:30.937760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.749 [2024-05-15 01:04:30.937775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.749 [2024-05-15 01:04:30.937791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.749 [2024-05-15 01:04:30.937805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.749 [2024-05-15 01:04:30.937821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.749 [2024-05-15 01:04:30.937836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.749 [2024-05-15 
01:04:30.937852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.749 [2024-05-15 01:04:30.937867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.749 [2024-05-15 01:04:30.937888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.749 [2024-05-15 01:04:30.937903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.749 [2024-05-15 01:04:30.937919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.749 [2024-05-15 01:04:30.937942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.749 [2024-05-15 01:04:30.937960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.749 [2024-05-15 01:04:30.937982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.749 [2024-05-15 01:04:30.937999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.749 [2024-05-15 01:04:30.938014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.749 [2024-05-15 01:04:30.938029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.749 [2024-05-15 01:04:30.938044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.749 [2024-05-15 01:04:30.938060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.749 [2024-05-15 01:04:30.938074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.749 [2024-05-15 01:04:30.938090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.749 [2024-05-15 01:04:30.938104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.749 [2024-05-15 01:04:30.938120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.749 [2024-05-15 01:04:30.938134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.749 [2024-05-15 01:04:30.938151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.749 [2024-05-15 01:04:30.938165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.749 [2024-05-15 01:04:30.938181] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.749 [2024-05-15 01:04:30.938195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.749 [2024-05-15 01:04:30.938211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.749 [2024-05-15 01:04:30.938235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.749 [2024-05-15 01:04:30.938251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.749 [2024-05-15 01:04:30.938266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.749 [2024-05-15 01:04:30.938282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.749 [2024-05-15 01:04:30.938301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.749 [2024-05-15 01:04:30.938318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.749 [2024-05-15 01:04:30.938333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.749 [2024-05-15 01:04:30.938349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.749 [2024-05-15 01:04:30.938363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.749 [2024-05-15 01:04:30.938379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.749 [2024-05-15 01:04:30.938393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.749 [2024-05-15 01:04:30.938409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.750 [2024-05-15 01:04:30.938423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.750 [2024-05-15 01:04:30.938440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.750 [2024-05-15 01:04:30.938455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.750 [2024-05-15 01:04:30.938470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.750 [2024-05-15 01:04:30.938485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.750 [2024-05-15 01:04:30.938502] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.750 [2024-05-15 01:04:30.938516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.750 [2024-05-15 01:04:30.938532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.750 [2024-05-15 01:04:30.938547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.750 [2024-05-15 01:04:30.938563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.750 [2024-05-15 01:04:30.938578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.750 [2024-05-15 01:04:30.938594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.750 [2024-05-15 01:04:30.938609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.750 [2024-05-15 01:04:30.938625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.750 [2024-05-15 01:04:30.938639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.750 [2024-05-15 01:04:30.938655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.750 [2024-05-15 01:04:30.938670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.750 [2024-05-15 01:04:30.938690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.750 [2024-05-15 01:04:30.938705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.750 [2024-05-15 01:04:30.938720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.750 [2024-05-15 01:04:30.938735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.750 [2024-05-15 01:04:30.938752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.750 [2024-05-15 01:04:30.938767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.750 [2024-05-15 01:04:30.938783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.750 [2024-05-15 01:04:30.938797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.750 [2024-05-15 01:04:30.938813] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.750 [2024-05-15 01:04:30.938828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.750 [2024-05-15 01:04:30.938843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.750 [2024-05-15 01:04:30.938858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.750 [2024-05-15 01:04:30.938874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.750 [2024-05-15 01:04:30.938888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.750 [2024-05-15 01:04:30.938904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.750 [2024-05-15 01:04:30.938920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.750 [2024-05-15 01:04:30.938943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.750 [2024-05-15 01:04:30.938959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.750 [2024-05-15 01:04:30.938982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.750 [2024-05-15 01:04:30.938996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.750 [2024-05-15 01:04:30.939013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.750 [2024-05-15 01:04:30.939027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.750 [2024-05-15 01:04:30.939043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.750 [2024-05-15 01:04:30.939058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.750 [2024-05-15 01:04:30.939075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.750 [2024-05-15 01:04:30.939093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.750 [2024-05-15 01:04:30.939110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.750 [2024-05-15 01:04:30.939125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.750 [2024-05-15 01:04:30.939141] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.750 [2024-05-15 01:04:30.939155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.750 [2024-05-15 01:04:30.939171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.750 [2024-05-15 01:04:30.939186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.750 [2024-05-15 01:04:30.939202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.750 [2024-05-15 01:04:30.939217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.750 [2024-05-15 01:04:30.939234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.750 [2024-05-15 01:04:30.939248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.750 [2024-05-15 01:04:30.939265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.750 [2024-05-15 01:04:30.939280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.750 [2024-05-15 01:04:30.939296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.750 [2024-05-15 01:04:30.939311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.750 [2024-05-15 01:04:30.939326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.750 [2024-05-15 01:04:30.939341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.750 [2024-05-15 01:04:30.939358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.750 [2024-05-15 01:04:30.939372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.750 [2024-05-15 01:04:30.939388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.750 [2024-05-15 01:04:30.939403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.750 [2024-05-15 01:04:30.939419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.750 [2024-05-15 01:04:30.939433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.750 [2024-05-15 01:04:30.939449] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.750 [2024-05-15 01:04:30.939463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.750 [2024-05-15 01:04:30.939484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.939499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.939515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.939530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.939546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.939560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.939575] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d39e70 is same with the state(5) to be set 00:17:18.751 [2024-05-15 01:04:30.940874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.940899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.940920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.940945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.940963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.940987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.941003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.941018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.941035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.941049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.941066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.941081] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.941097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.941112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.941128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.941142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.941158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.941173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.941194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.941209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.941229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.941244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.941260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.941274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.941290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.941304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.941320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.941335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.941351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.941365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.941382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.941396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.941412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.941427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.941442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.941456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.941472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.941487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.941503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.941517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.941533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.941548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.941572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.941594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.941612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.941626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.941641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.941656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.941672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.941687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.941703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.941717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.941732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.941747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.941762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.941776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.941792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.941806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.941821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.941836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.941851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.941865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.941881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.941895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.941911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.941925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.941948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.941975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.941996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.942011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.942027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.942042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.942058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.942072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.751 [2024-05-15 01:04:30.942089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.751 [2024-05-15 01:04:30.942104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.942120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.942135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.942151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.942166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.942182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.942196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.942212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.942226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.942242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.942257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.942273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.942287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.942302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.942317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.942332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.942347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:18.752 [2024-05-15 01:04:30.942363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.942381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.942400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.942416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.942432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.942447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.942464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.942479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.942495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.942509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.942527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.942542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.942557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.942572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.942589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.942604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.942620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.942635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.942651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.942666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 
01:04:30.942682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.942696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.942712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.942727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.942744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.942758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.942778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.942793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.942809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.942824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.942840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.942855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.942872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.942886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.942902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.942916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.942937] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae0220 is same with the state(5) to be set 00:17:18.752 [2024-05-15 01:04:30.944202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.944234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.944254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.944270] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.944287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.944302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.944317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.944332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.944348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.944364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.944380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.944395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.944411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.944426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.944442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.944461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.944478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.944492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.944508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.944523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.944539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.944554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.944570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.944584] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.944600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.944615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.944630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.752 [2024-05-15 01:04:30.944645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.752 [2024-05-15 01:04:30.944660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.944675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.944691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.944705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.944721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.944736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.944751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.944766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.944782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.944796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.944812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.944827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.944846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.944862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.944878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.944892] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.944908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.944922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.944946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.944962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.944978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.944992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.945008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.945023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.945039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.945053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.945069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.945083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.945099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.945114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.945130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.945145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.945161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.945176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.945192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.945206] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.945221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.945240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.945256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.945271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.945287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.945301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.945317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.945331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.945347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.945367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.945390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.945405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.945420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.945435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.945451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.945465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.945481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.945495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.945511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.945526] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.945541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.945556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.945572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.945586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.945602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.945616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.945636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.945651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.945667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.953944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.954027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.954044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.954061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.954075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.954092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.954107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.954122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.954137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.954153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.954168] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.954184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.954200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.954218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.954233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.954249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.753 [2024-05-15 01:04:30.954263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.753 [2024-05-15 01:04:30.954279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.954294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.954310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.954324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.954340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.954365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.954383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.954397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.954413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.954428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.954444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.954458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.954475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.954489] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.954505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.954519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.954535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.954549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.954565] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c32800 is same with the state(5) to be set 00:17:18.754 [2024-05-15 01:04:30.955925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.955957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.955983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.955999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.956016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.956030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.956046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.956061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.956077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.956092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.956108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.956128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.956145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.956159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.956175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.956190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.956206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.956220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.956235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.956250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.956266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.956280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.956296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.956310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.956325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.956340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.956356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.956370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.956386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.956400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.956415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.956430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.956446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.956460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.956476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.956490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.956510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.956525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.956541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.956556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.956572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.956587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.956603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.956617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.956633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.956647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.956663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.956678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.956694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.956709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.956726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.956740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.956757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.956771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.956787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.956801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.956817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.956831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.956847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.956861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.956877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.956895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.754 [2024-05-15 01:04:30.956912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.754 [2024-05-15 01:04:30.956926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.755 [2024-05-15 01:04:30.956950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.755 [2024-05-15 01:04:30.956974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.755 [2024-05-15 01:04:30.956989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.755 [2024-05-15 01:04:30.957004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.755 [2024-05-15 01:04:30.957020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.755 [2024-05-15 01:04:30.957035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.755 [2024-05-15 01:04:30.957051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.755 [2024-05-15 01:04:30.957066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.755 [2024-05-15 01:04:30.957083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.755 [2024-05-15 01:04:30.957097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.755 [2024-05-15 01:04:30.957113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:18.755 [2024-05-15 01:04:30.957128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.755 [2024-05-15 01:04:30.957144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.755 [2024-05-15 01:04:30.957158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.755 [2024-05-15 01:04:30.957175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.755 [2024-05-15 01:04:30.957189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.755 [2024-05-15 01:04:30.957204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.755 [2024-05-15 01:04:30.957219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.755 [2024-05-15 01:04:30.957234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.755 [2024-05-15 01:04:30.957249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.755 [2024-05-15 01:04:30.957265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.755 [2024-05-15 01:04:30.957279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.755 [2024-05-15 01:04:30.957299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.755 [2024-05-15 01:04:30.957313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.755 [2024-05-15 01:04:30.957330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.755 [2024-05-15 01:04:30.957344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.755 [2024-05-15 01:04:30.957360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.755 [2024-05-15 01:04:30.957375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.755 [2024-05-15 01:04:30.957391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.755 [2024-05-15 01:04:30.957405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.755 [2024-05-15 01:04:30.957422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:18.755 [2024-05-15 01:04:30.957436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.755 [2024-05-15 01:04:30.957451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.755 [2024-05-15 01:04:30.957466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.755 [2024-05-15 01:04:30.957482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.755 [2024-05-15 01:04:30.957496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.755 [2024-05-15 01:04:30.957512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.755 [2024-05-15 01:04:30.957527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.755 [2024-05-15 01:04:30.957542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.755 [2024-05-15 01:04:30.957557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.755 [2024-05-15 01:04:30.957574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.755 [2024-05-15 01:04:30.957588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.755 [2024-05-15 01:04:30.957604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.755 [2024-05-15 01:04:30.957619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.755 [2024-05-15 01:04:30.957634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.755 [2024-05-15 01:04:30.957649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.755 [2024-05-15 01:04:30.957666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.755 [2024-05-15 01:04:30.957683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.755 [2024-05-15 01:04:30.957701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.755 [2024-05-15 01:04:30.957715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.755 [2024-05-15 01:04:30.957732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.755 [2024-05-15 
01:04:30.957747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.755 [2024-05-15 01:04:30.957763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.755 [2024-05-15 01:04:30.957777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.755 [2024-05-15 01:04:30.957793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.755 [2024-05-15 01:04:30.957808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.755 [2024-05-15 01:04:30.957823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.755 [2024-05-15 01:04:30.957838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.755 [2024-05-15 01:04:30.957853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.755 [2024-05-15 01:04:30.957867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.755 [2024-05-15 01:04:30.957883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.755 [2024-05-15 01:04:30.957898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.755 [2024-05-15 01:04:30.957914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.755 [2024-05-15 01:04:30.957938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.755 [2024-05-15 01:04:30.957956] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c33d00 is same with the state(5) to be set 00:17:18.755 [2024-05-15 01:04:30.959184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.755 [2024-05-15 01:04:30.959208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.755 [2024-05-15 01:04:30.959228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.755 [2024-05-15 01:04:30.959244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.756 [2024-05-15 01:04:30.959261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:18.756 [2024-05-15 01:04:30.959275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:18.756 [2024-05-15 01:04:30.959292] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:18.756 [2024-05-15 01:04:30.959311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated output condensed: the same READ (nsid:1, len:128) / ABORTED - SQ DELETION (00/08) pair is printed for cid:6 through cid:63, lba:25344 through lba:32640 in steps of 128 blocks, timestamps 01:04:30.959327 through 01:04:30.961132 ...]
00:17:18.757 [2024-05-15 01:04:30.961146] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25b0bd0 is same with the state(5) to be set
00:17:18.757 [2024-05-15 01:04:30.963168] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
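For reference, the run of aborted READs condensed above can be tallied straight from the raw console output; a minimal sketch, assuming the console text has been saved locally as build.log (a hypothetical file name, not something the test itself produces):

    # count aborted submissions, and the distinct command IDs they belonged to
    grep -c 'ABORTED - SQ DELETION' build.log
    grep -o 'READ sqid:1 cid:[0-9]*' build.log | sort -u | wc -l

Each of these READs is 128 blocks against nsid:1; they were still in flight (queue depth 64 per bdevperf job) when the submission queue was deleted during target shutdown, so they complete with the generic ABORTED - SQ DELETION status (00/08) instead of returning data.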
00:17:18.757 [2024-05-15 01:04:30.963199] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:17:18.757 [2024-05-15 01:04:30.963222] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:17:18.757 [2024-05-15 01:04:30.963240] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:17:18.757 [2024-05-15 01:04:30.963306] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0fd10 (9): Bad file descriptor
00:17:18.757 [2024-05-15 01:04:30.963403] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:18.757 [2024-05-15 01:04:30.963431] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:18.757 [2024-05-15 01:04:30.963452] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:18.757 [2024-05-15 01:04:30.963804] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:17:18.757 task offset: 27776 on job bdev=Nvme1n1 fails
00:17:18.757
00:17:18.757 Latency(us)
00:17:18.757 Device Information : runtime(s)    IOPS     MiB/s   Fail/s    TO/s    Average       min        max
00:17:18.757 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:18.757 Job: Nvme1n1 ended in about 1.03 seconds with error
00:17:18.757 Verification LBA range: start 0x0 length 0x400
00:17:18.757 Nvme1n1            : 1.03        186.57   11.66   62.19     0.00  254690.51    6553.60  259425.47
00:17:18.757 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:18.757 Job: Nvme2n1 ended in about 1.07 seconds with error
00:17:18.757 Verification LBA range: start 0x0 length 0x400
00:17:18.757 Nvme2n1            : 1.07        179.45   11.22   59.82     0.00  260298.90   22622.06  236123.78
00:17:18.757 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:18.757 Job: Nvme3n1 ended in about 1.04 seconds with error
00:17:18.757 Verification LBA range: start 0x0 length 0x400
00:17:18.757 Nvme3n1            : 1.04        244.31   15.27   11.50     0.00  238246.22    5437.06  253211.69
00:17:18.757 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:18.757 Job: Nvme4n1 ended in about 1.07 seconds with error
00:17:18.757 Verification LBA range: start 0x0 length 0x400
00:17:18.757 Nvme4n1            : 1.07        178.89   11.18   59.63     0.00  251868.35   20194.80  256318.58
00:17:18.757 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:18.757 Verification LBA range: start 0x0 length 0x400
00:17:18.757 Nvme5n1            : 1.04        184.79   11.55    0.00     0.00  318418.24   21359.88  274959.93
00:17:18.757 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:18.757 Job: Nvme6n1 ended in about 1.08 seconds with error
00:17:18.757 Verification LBA range: start 0x0 length 0x400
00:17:18.757 Nvme6n1            : 1.08        176.98   11.06   58.99     0.00  245772.33   20388.98  256318.58
00:17:18.757 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:18.757 Job: Nvme7n1 ended in about 1.09 seconds with error
00:17:18.757 Verification LBA range: start 0x0 length 0x400
00:17:18.757 Nvme7n1            : 1.09        176.43   11.03   58.81     0.00  242213.74   21942.42  260978.92
00:17:18.757 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:18.757 Job: Nvme8n1 ended in about 1.09 seconds with error
00:17:18.757 Verification LBA range: start 0x0 length 0x400
00:17:18.757 Nvme8n1            : 1.09        177.75   11.11   56.81     0.00  238052.31   20583.16  256318.58
00:17:18.757 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:18.757 Job: Nvme9n1 ended in about 1.05 seconds with error
00:17:18.757 Verification LBA range: start 0x0 length 0x400
00:17:18.757 Nvme9n1            : 1.05        182.65   11.42   10.46     0.00  282424.17    1055.86  290494.39
00:17:18.757 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:18.757 Job: Nvme10n1 ended in about 1.04 seconds with error
00:17:18.757 Verification LBA range: start 0x0 length 0x400
00:17:18.757 Nvme10n1           : 1.04        185.12   11.57   61.71     0.00  216248.04   11747.93  254765.13
00:17:18.757 ===================================================================================================================
00:17:18.757 Total              :            1872.96  117.06  439.92     0.00  252494.32    1055.86  290494.39
00:17:18.757 [2024-05-15 01:04:30.992248] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:17:18.757 [2024-05-15 01:04:30.992337] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:17:18.757 [2024-05-15 01:04:30.992775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:18.757 [2024-05-15 01:04:30.992998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:18.757 [2024-05-15 01:04:30.993028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1caed00 with addr=10.0.0.2, port=4420
00:17:18.757 [2024-05-15 01:04:30.993048] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1caed00 is same with the state(5) to be set
00:17:18.757 [2024-05-15 01:04:30.993228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:18.757 [2024-05-15 01:04:30.993416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:18.757 [2024-05-15 01:04:30.993442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b086b0 with addr=10.0.0.2, port=4420
00:17:18.757 [2024-05-15 01:04:30.993458] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b086b0 is same with the state(5) to be set
00:17:18.757 [2024-05-15 01:04:30.993619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:18.757 [2024-05-15 01:04:30.993777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:18.757 [2024-05-15 01:04:30.993803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c89960 with addr=10.0.0.2, port=4420
00:17:18.757 [2024-05-15 01:04:30.993819] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c89960 is same with the state(5) to be set
00:17:18.757 [2024-05-15 01:04:30.993835] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:17:18.757 [2024-05-15 01:04:30.993849] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:17:18.757 [2024-05-15 01:04:30.993865] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
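The throughput columns above are internally consistent with the 65536-byte IO size each job uses: MiB/s should equal IOPS x 65536 / 2^20, i.e. IOPS / 16. A quick spot check of the Nvme1n1 and Total rows (plain shell arithmetic, not part of the test harness):

    awk 'BEGIN { printf "%.2f %.2f\n", 186.57 / 16, 1872.96 / 16 }'
    # 11.66 117.06  -> matches the MiB/s column for Nvme1n1 and for the Total row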
00:17:18.758 [2024-05-15 01:04:30.995278] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:17:18.758 [2024-05-15 01:04:30.995311] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:17:18.758 [2024-05-15 01:04:30.995329] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:18.758 [2024-05-15 01:04:30.995346] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:17:18.758 [2024-05-15 01:04:30.995363] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:18.758 [2024-05-15 01:04:30.995609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.758 [2024-05-15 01:04:30.995882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.758 [2024-05-15 01:04:30.995909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c87fb0 with addr=10.0.0.2, port=4420 00:17:18.758 [2024-05-15 01:04:30.995925] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c87fb0 is same with the state(5) to be set 00:17:18.758 [2024-05-15 01:04:30.996099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.758 [2024-05-15 01:04:30.996284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.758 [2024-05-15 01:04:30.996310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c9f230 with addr=10.0.0.2, port=4420 00:17:18.758 [2024-05-15 01:04:30.996327] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9f230 is same with the state(5) to be set 00:17:18.758 [2024-05-15 01:04:30.996353] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1caed00 (9): Bad file descriptor 00:17:18.758 [2024-05-15 01:04:30.996376] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b086b0 (9): Bad file descriptor 00:17:18.758 [2024-05-15 01:04:30.996395] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c89960 (9): Bad file descriptor 00:17:18.758 [2024-05-15 01:04:30.996461] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:18.758 [2024-05-15 01:04:30.996493] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:18.758 [2024-05-15 01:04:30.996514] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:17:18.758 [2024-05-15 01:04:30.996769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.758 [2024-05-15 01:04:30.996958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.758 [2024-05-15 01:04:30.996991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b33d30 with addr=10.0.0.2, port=4420 00:17:18.758 [2024-05-15 01:04:30.997009] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b33d30 is same with the state(5) to be set 00:17:18.758 [2024-05-15 01:04:30.997164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.758 [2024-05-15 01:04:30.997357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.758 [2024-05-15 01:04:30.997384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2ef20 with addr=10.0.0.2, port=4420 00:17:18.758 [2024-05-15 01:04:30.997400] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2ef20 is same with the state(5) to be set 00:17:18.758 [2024-05-15 01:04:30.997544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.758 [2024-05-15 01:04:30.997703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.758 [2024-05-15 01:04:30.997730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae47c0 with addr=10.0.0.2, port=4420 00:17:18.758 [2024-05-15 01:04:30.997746] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae47c0 is same with the state(5) to be set 00:17:18.758 [2024-05-15 01:04:30.997938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.758 [2024-05-15 01:04:30.998098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.758 [2024-05-15 01:04:30.998124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x163f730 with addr=10.0.0.2, port=4420 00:17:18.758 [2024-05-15 01:04:30.998140] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x163f730 is same with the state(5) to be set 00:17:18.758 [2024-05-15 01:04:30.998159] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c87fb0 (9): Bad file descriptor 00:17:18.758 [2024-05-15 01:04:30.998179] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c9f230 (9): Bad file descriptor 00:17:18.758 [2024-05-15 01:04:30.998195] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:18.758 [2024-05-15 01:04:30.998208] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:17:18.758 [2024-05-15 01:04:30.998222] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:17:18.758 [2024-05-15 01:04:30.998241] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:17:18.758 [2024-05-15 01:04:30.998255] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:17:18.758 [2024-05-15 01:04:30.998269] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
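The repeated "connect() failed, errno = 111" entries are consistent with the target having been stopped while bdevperf still held controllers open: every reconnect attempt to 10.0.0.2 port 4420 is refused because nothing is listening there any more. On Linux, errno 111 is ECONNREFUSED, which can be confirmed with a one-off lookup (illustrative only, not part of the test scripts):

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # ECONNREFUSED - Connection refused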
00:17:18.758 [2024-05-15 01:04:30.998285] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:17:18.758 [2024-05-15 01:04:30.998298] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:17:18.758 [2024-05-15 01:04:30.998311] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:17:18.758 [2024-05-15 01:04:30.998401] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:17:18.758 [2024-05-15 01:04:30.998427] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:18.758 [2024-05-15 01:04:30.998440] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:18.758 [2024-05-15 01:04:30.998451] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:18.758 [2024-05-15 01:04:30.998474] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b33d30 (9): Bad file descriptor 00:17:18.758 [2024-05-15 01:04:30.998496] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b2ef20 (9): Bad file descriptor 00:17:18.758 [2024-05-15 01:04:30.998520] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae47c0 (9): Bad file descriptor 00:17:18.758 [2024-05-15 01:04:30.998538] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x163f730 (9): Bad file descriptor 00:17:18.758 [2024-05-15 01:04:30.998555] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:17:18.758 [2024-05-15 01:04:30.998568] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:17:18.758 [2024-05-15 01:04:30.998581] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:17:18.758 [2024-05-15 01:04:30.998596] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:17:18.758 [2024-05-15 01:04:30.998611] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:17:18.758 [2024-05-15 01:04:30.998623] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:17:18.758 [2024-05-15 01:04:30.998663] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:18.758 [2024-05-15 01:04:30.998682] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:18.758 [2024-05-15 01:04:30.998855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.758 [2024-05-15 01:04:30.999011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.758 [2024-05-15 01:04:30.999037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b0fd10 with addr=10.0.0.2, port=4420 00:17:18.758 [2024-05-15 01:04:30.999053] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0fd10 is same with the state(5) to be set 00:17:18.758 [2024-05-15 01:04:30.999068] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:17:18.758 [2024-05-15 01:04:30.999081] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:17:18.758 [2024-05-15 01:04:30.999095] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:17:18.758 [2024-05-15 01:04:30.999112] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:17:18.758 [2024-05-15 01:04:30.999127] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:17:18.758 [2024-05-15 01:04:30.999139] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:17:18.758 [2024-05-15 01:04:30.999155] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:18.758 [2024-05-15 01:04:30.999169] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:18.758 [2024-05-15 01:04:30.999182] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:18.758 [2024-05-15 01:04:30.999197] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:17:18.758 [2024-05-15 01:04:30.999212] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:17:18.758 [2024-05-15 01:04:30.999224] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:17:18.758 [2024-05-15 01:04:30.999263] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:18.758 [2024-05-15 01:04:30.999281] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:18.758 [2024-05-15 01:04:30.999293] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:18.758 [2024-05-15 01:04:30.999304] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:18.758 [2024-05-15 01:04:30.999325] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0fd10 (9): Bad file descriptor 00:17:18.758 [2024-05-15 01:04:30.999369] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:17:18.758 [2024-05-15 01:04:30.999388] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:17:18.758 [2024-05-15 01:04:30.999402] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
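If a bdevperf instance in this state were still responsive, its view of the attached controllers could be dumped over its JSON-RPC socket; a sketch only, assuming the repo's scripts/rpc.py is available and that bdevperf was started with the socket path used elsewhere in this run (/var/tmp/bdevperf.sock):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'

In this particular run the application had already called spdk_app_stop, so the socket would no longer answer; the command only shows where the controller state behind these errors would normally be queried.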
00:17:18.758 [2024-05-15 01:04:30.999453] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:19.344 01:04:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:17:19.344 01:04:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:17:20.277 01:04:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1280610 00:17:20.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1280610) - No such process 00:17:20.277 01:04:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:17:20.277 01:04:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:17:20.277 01:04:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:17:20.277 01:04:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:20.277 01:04:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:20.277 01:04:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:17:20.277 01:04:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:20.277 01:04:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:17:20.277 01:04:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:20.277 01:04:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:17:20.277 01:04:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:20.277 01:04:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:20.277 rmmod nvme_tcp 00:17:20.277 rmmod nvme_fabrics 00:17:20.277 rmmod nvme_keyring 00:17:20.277 01:04:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:20.277 01:04:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:17:20.277 01:04:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:17:20.277 01:04:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:20.277 01:04:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:20.277 01:04:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:20.277 01:04:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:20.277 01:04:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:20.277 01:04:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:20.277 01:04:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.277 01:04:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.277 01:04:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.810 01:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:17:22.810 00:17:22.810 real 0m8.614s 00:17:22.810 user 0m23.188s 00:17:22.810 sys 0m1.666s 00:17:22.810 01:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:22.810 01:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:22.810 ************************************ 00:17:22.810 END TEST nvmf_shutdown_tc3 00:17:22.810 ************************************ 00:17:22.810 01:04:34 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:17:22.810 00:17:22.810 real 0m29.410s 00:17:22.810 user 1m22.961s 00:17:22.810 sys 0m7.033s 00:17:22.810 01:04:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:22.810 01:04:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:17:22.810 ************************************ 00:17:22.810 END TEST nvmf_shutdown 00:17:22.810 ************************************ 00:17:22.810 01:04:34 nvmf_tcp -- nvmf/nvmf.sh@84 -- # timing_exit target 00:17:22.810 01:04:34 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:22.810 01:04:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:22.810 01:04:34 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_enter host 00:17:22.810 01:04:34 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:22.810 01:04:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:22.810 01:04:34 nvmf_tcp -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:17:22.810 01:04:34 nvmf_tcp -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:17:22.810 01:04:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:22.810 01:04:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:22.810 01:04:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:22.810 ************************************ 00:17:22.810 START TEST nvmf_multicontroller 00:17:22.810 ************************************ 00:17:22.810 01:04:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:17:22.810 * Looking for test storage... 
00:17:22.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:22.810 01:04:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:22.810 01:04:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:17:22.810 01:04:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:22.810 01:04:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:22.810 01:04:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:22.810 01:04:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:22.810 01:04:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:22.810 01:04:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:22.810 01:04:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:22.810 01:04:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:22.810 01:04:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:22.810 01:04:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:22.810 01:04:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:22.810 01:04:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:22.810 01:04:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:22.810 01:04:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:22.810 01:04:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:22.810 01:04:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:22.810 01:04:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:22.810 01:04:34 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:22.810 01:04:34 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:22.810 01:04:34 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:22.810 01:04:34 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.810 01:04:34 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.810 01:04:34 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.810 01:04:34 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:17:22.811 01:04:34 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.811 01:04:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:17:22.811 01:04:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:22.811 01:04:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:22.811 01:04:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:22.811 01:04:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:22.811 01:04:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:22.811 01:04:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:22.811 01:04:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:22.811 01:04:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:22.811 01:04:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:22.811 01:04:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:22.811 01:04:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:17:22.811 01:04:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:17:22.811 01:04:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:22.811 01:04:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:17:22.811 01:04:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:17:22.811 01:04:34 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:22.811 01:04:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:22.811 01:04:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:22.811 01:04:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:22.811 01:04:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:22.811 01:04:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.811 01:04:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:22.811 01:04:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.811 01:04:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:22.811 01:04:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:22.811 01:04:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:17:22.811 01:04:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:25.340 01:04:37 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:25.340 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:25.340 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:25.340 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:25.340 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:25.340 01:04:37 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:25.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:25.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:17:25.340 00:17:25.340 --- 10.0.0.2 ping statistics --- 00:17:25.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.340 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:25.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:25.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:17:25.340 00:17:25.340 --- 10.0.0.1 ping statistics --- 00:17:25.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.340 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:25.340 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:25.341 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:25.341 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1283552 00:17:25.341 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:25.341 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1283552 00:17:25.341 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 1283552 ']' 00:17:25.341 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.341 01:04:37 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:17:25.341 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.341 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:25.341 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:25.341 [2024-05-15 01:04:37.359730] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:17:25.341 [2024-05-15 01:04:37.359808] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:25.341 EAL: No free 2048 kB hugepages reported on node 1 00:17:25.341 [2024-05-15 01:04:37.436262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:25.341 [2024-05-15 01:04:37.546533] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:25.341 [2024-05-15 01:04:37.546593] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:25.341 [2024-05-15 01:04:37.546606] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:25.341 [2024-05-15 01:04:37.546617] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:25.341 [2024-05-15 01:04:37.546626] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:25.341 [2024-05-15 01:04:37.546712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:25.341 [2024-05-15 01:04:37.546775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:25.341 [2024-05-15 01:04:37.546778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.341 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:25.341 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:17:25.341 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:25.341 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:25.341 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:25.341 01:04:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:25.341 01:04:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:25.341 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.341 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:25.341 [2024-05-15 01:04:37.698180] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:25.341 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.341 01:04:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:25.341 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.341 01:04:37 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:25.600 Malloc0 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:25.600 [2024-05-15 01:04:37.766330] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:25.600 [2024-05-15 01:04:37.766600] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:25.600 [2024-05-15 01:04:37.774450] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:25.600 Malloc1 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.600 01:04:37 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1283574 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1283574 /var/tmp/bdevperf.sock 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 1283574 ']' 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:25.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
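For orientation, the target-side configuration that this phase of the trace performs can be condensed into the following sketch. It assumes SPDK's scripts/rpc.py is used in place of the test suite's rpc_cmd wrapper (the method names, NQNs, addresses and ports below are taken verbatim from the trace; the rpc.py spelling itself is an assumption), and that nvmf_tgt is already running inside the cvl_0_0_ns_spdk namespace listening on /var/tmp/spdk.sock:

    # transport and first subsystem (nqn.2016-06.io.spdk:cnode1) with two TCP listeners
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # second subsystem (nqn.2016-06.io.spdk:cnode2) with its own namespace and the same listeners
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
    # bdevperf is then started with its own RPC socket for the initiator-side attach tests
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f

The bdev_nvme_attach_controller calls that follow against /var/tmp/bdevperf.sock first attach NVMe0 to cnode1 on port 4420; the NOT-wrapped re-attach attempts (different hostnqn, different subnqn, -x disable, -x failover on the same path) are all expected to fail with JSON-RPC error -114, while the plain attach of NVMe0 to the second listener on port 4421 and the later attach of NVMe1 succeed, leaving two controllers for bdevperf to drive.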
00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:25.600 01:04:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:25.858 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:25.858 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:17:25.858 01:04:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:17:25.858 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.858 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:25.858 NVMe0n1 00:17:25.858 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.858 01:04:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:25.858 01:04:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:17:25.858 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.858 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:26.116 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.116 1 00:17:26.116 01:04:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:17:26.116 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:17:26.116 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:17:26.116 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:26.116 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:26.116 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:26.116 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:26.116 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:17:26.116 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.116 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:26.116 request: 00:17:26.116 { 00:17:26.116 "name": "NVMe0", 00:17:26.116 "trtype": "tcp", 00:17:26.116 "traddr": "10.0.0.2", 00:17:26.116 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:17:26.116 "hostaddr": "10.0.0.2", 00:17:26.116 "hostsvcid": "60000", 00:17:26.116 "adrfam": "ipv4", 00:17:26.116 "trsvcid": "4420", 00:17:26.116 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:26.116 "method": 
"bdev_nvme_attach_controller", 00:17:26.116 "req_id": 1 00:17:26.116 } 00:17:26.116 Got JSON-RPC error response 00:17:26.116 response: 00:17:26.116 { 00:17:26.116 "code": -114, 00:17:26.116 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:17:26.116 } 00:17:26.116 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:26.116 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:17:26.116 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:26.117 request: 00:17:26.117 { 00:17:26.117 "name": "NVMe0", 00:17:26.117 "trtype": "tcp", 00:17:26.117 "traddr": "10.0.0.2", 00:17:26.117 "hostaddr": "10.0.0.2", 00:17:26.117 "hostsvcid": "60000", 00:17:26.117 "adrfam": "ipv4", 00:17:26.117 "trsvcid": "4420", 00:17:26.117 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:26.117 "method": "bdev_nvme_attach_controller", 00:17:26.117 "req_id": 1 00:17:26.117 } 00:17:26.117 Got JSON-RPC error response 00:17:26.117 response: 00:17:26.117 { 00:17:26.117 "code": -114, 00:17:26.117 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:17:26.117 } 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:26.117 request: 00:17:26.117 { 00:17:26.117 "name": "NVMe0", 00:17:26.117 "trtype": "tcp", 00:17:26.117 "traddr": "10.0.0.2", 00:17:26.117 "hostaddr": "10.0.0.2", 00:17:26.117 "hostsvcid": "60000", 00:17:26.117 "adrfam": "ipv4", 00:17:26.117 "trsvcid": "4420", 00:17:26.117 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:26.117 "multipath": "disable", 00:17:26.117 "method": "bdev_nvme_attach_controller", 00:17:26.117 "req_id": 1 00:17:26.117 } 00:17:26.117 Got JSON-RPC error response 00:17:26.117 response: 00:17:26.117 { 00:17:26.117 "code": -114, 00:17:26.117 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:17:26.117 } 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:26.117 request: 00:17:26.117 { 00:17:26.117 "name": "NVMe0", 00:17:26.117 "trtype": "tcp", 00:17:26.117 "traddr": "10.0.0.2", 00:17:26.117 "hostaddr": "10.0.0.2", 00:17:26.117 "hostsvcid": "60000", 00:17:26.117 "adrfam": "ipv4", 00:17:26.117 "trsvcid": "4420", 00:17:26.117 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:26.117 "multipath": "failover", 00:17:26.117 "method": "bdev_nvme_attach_controller", 00:17:26.117 "req_id": 1 00:17:26.117 } 00:17:26.117 Got JSON-RPC error response 00:17:26.117 response: 00:17:26.117 { 00:17:26.117 "code": -114, 00:17:26.117 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:17:26.117 } 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.117 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:26.375 00:17:26.375 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.375 01:04:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:26.375 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.375 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:26.375 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.375 01:04:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:17:26.375 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.375 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:26.633 00:17:26.633 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.633 01:04:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:26.633 01:04:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:17:26.633 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.633 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:26.633 01:04:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.633 01:04:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:17:26.633 01:04:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:27.566 0 00:17:27.566 01:04:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:17:27.566 01:04:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.566 01:04:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:27.566 01:04:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.566 01:04:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1283574 00:17:27.566 01:04:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 1283574 ']' 00:17:27.566 01:04:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 1283574 00:17:27.566 01:04:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:17:27.566 01:04:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:27.566 01:04:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1283574 00:17:27.566 01:04:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:27.566 01:04:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:27.566 01:04:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1283574' 00:17:27.566 killing process with pid 1283574 00:17:27.566 01:04:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 1283574 00:17:27.566 01:04:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 1283574 00:17:27.823 01:04:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:27.823 01:04:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.823 01:04:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:27.823 01:04:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.823 01:04:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:17:27.823 01:04:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.823 01:04:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:27.823 01:04:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.823 01:04:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:17:27.823 01:04:40 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:17:27.823 01:04:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:17:27.823 01:04:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:17:27.823 01:04:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:17:27.823 01:04:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:17:28.080 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:17:28.080 [2024-05-15 01:04:37.878717] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:17:28.080 [2024-05-15 01:04:37.878814] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1283574 ] 00:17:28.080 EAL: No free 2048 kB hugepages reported on node 1 00:17:28.080 [2024-05-15 01:04:37.948704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.080 [2024-05-15 01:04:38.060899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.080 [2024-05-15 01:04:38.769979] bdev.c:4575:bdev_name_add: *ERROR*: Bdev name 947b260f-3b4a-4e58-b89c-b17e5e334174 already exists 00:17:28.080 [2024-05-15 01:04:38.770020] bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:947b260f-3b4a-4e58-b89c-b17e5e334174 alias for bdev NVMe1n1 00:17:28.080 [2024-05-15 01:04:38.770048] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:17:28.080 Running I/O for 1 seconds... 
00:17:28.080 00:17:28.080 Latency(us) 00:17:28.080 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.080 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:17:28.080 NVMe0n1 : 1.01 16941.95 66.18 0.00 0.00 7522.08 2038.90 9320.68 00:17:28.080 =================================================================================================================== 00:17:28.080 Total : 16941.95 66.18 0.00 0.00 7522.08 2038.90 9320.68 00:17:28.080 Received shutdown signal, test time was about 1.000000 seconds 00:17:28.080 00:17:28.080 Latency(us) 00:17:28.080 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.081 =================================================================================================================== 00:17:28.081 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:28.081 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:17:28.081 01:04:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:17:28.081 01:04:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:17:28.081 01:04:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:17:28.081 01:04:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:28.081 01:04:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:17:28.081 01:04:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:28.081 01:04:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:17:28.081 01:04:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:28.081 01:04:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:28.081 rmmod nvme_tcp 00:17:28.081 rmmod nvme_fabrics 00:17:28.081 rmmod nvme_keyring 00:17:28.081 01:04:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:28.081 01:04:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:17:28.081 01:04:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:17:28.081 01:04:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1283552 ']' 00:17:28.081 01:04:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1283552 00:17:28.081 01:04:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 1283552 ']' 00:17:28.081 01:04:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 1283552 00:17:28.081 01:04:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:17:28.081 01:04:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:28.081 01:04:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1283552 00:17:28.081 01:04:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:28.081 01:04:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:28.081 01:04:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1283552' 00:17:28.081 killing process with pid 1283552 00:17:28.081 01:04:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 1283552 00:17:28.081 [2024-05-15 
01:04:40.294018] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:28.081 01:04:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 1283552 00:17:28.339 01:04:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:28.339 01:04:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:28.339 01:04:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:28.339 01:04:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:28.339 01:04:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:28.339 01:04:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.339 01:04:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:28.339 01:04:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.871 01:04:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:30.871 00:17:30.871 real 0m7.935s 00:17:30.871 user 0m12.151s 00:17:30.871 sys 0m2.599s 00:17:30.871 01:04:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:30.871 01:04:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:30.871 ************************************ 00:17:30.871 END TEST nvmf_multicontroller 00:17:30.871 ************************************ 00:17:30.871 01:04:42 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:17:30.872 01:04:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:30.872 01:04:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:30.872 01:04:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:30.872 ************************************ 00:17:30.872 START TEST nvmf_aer 00:17:30.872 ************************************ 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:17:30.872 * Looking for test storage... 
00:17:30.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:17:30.872 01:04:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:32.774 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:17:32.774 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:32.774 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:32.774 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:32.774 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:32.775 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:32.775 
01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:32.775 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:32.775 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:32.775 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:32.775 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:32.775 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:32.775 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:32.775 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:32.775 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:32.775 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:32.775 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:32.775 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:33.033 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:33.033 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:33.033 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:33.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:33.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:17:33.033 00:17:33.033 --- 10.0.0.2 ping statistics --- 00:17:33.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.033 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:17:33.033 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:33.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:33.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:17:33.033 00:17:33.033 --- 10.0.0.1 ping statistics --- 00:17:33.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.033 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:17:33.033 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:33.033 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:17:33.033 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:33.033 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:33.033 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:33.033 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:33.033 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:33.033 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:33.033 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:33.033 01:04:45 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:17:33.033 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:33.033 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:33.033 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:33.033 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1286197 00:17:33.033 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:33.033 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1286197 00:17:33.033 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 1286197 ']' 00:17:33.033 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.033 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:33.033 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.033 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:33.033 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:33.033 [2024-05-15 01:04:45.290294] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:17:33.033 [2024-05-15 01:04:45.290369] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.033 EAL: No free 2048 kB hugepages reported on node 1 00:17:33.033 [2024-05-15 01:04:45.365578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:33.292 [2024-05-15 01:04:45.473707] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.292 [2024-05-15 01:04:45.473767] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:33.292 [2024-05-15 01:04:45.473780] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:33.292 [2024-05-15 01:04:45.473790] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:33.292 [2024-05-15 01:04:45.473800] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:33.292 [2024-05-15 01:04:45.473889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.292 [2024-05-15 01:04:45.473957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:33.292 [2024-05-15 01:04:45.474020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:33.292 [2024-05-15 01:04:45.474023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.292 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:33.292 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:17:33.292 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:33.292 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:33.292 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:33.292 01:04:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:33.292 01:04:45 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:33.292 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.292 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:33.292 [2024-05-15 01:04:45.631880] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:33.292 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.292 01:04:45 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:17:33.292 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.292 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:33.292 Malloc0 00:17:33.292 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.292 01:04:45 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:17:33.292 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.292 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:33.292 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.292 01:04:45 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:33.292 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.292 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:33.292 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.292 01:04:45 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:33.292 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.292 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:33.551 [2024-05-15 01:04:45.685172] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:33.551 [2024-05-15 01:04:45.685490] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:33.551 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.551 01:04:45 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:17:33.551 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.551 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:33.551 [ 00:17:33.551 { 00:17:33.551 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:33.551 "subtype": "Discovery", 00:17:33.551 "listen_addresses": [], 00:17:33.551 "allow_any_host": true, 00:17:33.551 "hosts": [] 00:17:33.551 }, 00:17:33.551 { 00:17:33.551 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:33.551 "subtype": "NVMe", 00:17:33.551 "listen_addresses": [ 00:17:33.551 { 00:17:33.551 "trtype": "TCP", 00:17:33.551 "adrfam": "IPv4", 00:17:33.551 "traddr": "10.0.0.2", 00:17:33.551 "trsvcid": "4420" 00:17:33.551 } 00:17:33.551 ], 00:17:33.551 "allow_any_host": true, 00:17:33.551 "hosts": [], 00:17:33.551 "serial_number": "SPDK00000000000001", 00:17:33.551 "model_number": "SPDK bdev Controller", 00:17:33.551 "max_namespaces": 2, 00:17:33.551 "min_cntlid": 1, 00:17:33.551 "max_cntlid": 65519, 00:17:33.551 "namespaces": [ 00:17:33.551 { 00:17:33.551 "nsid": 1, 00:17:33.551 "bdev_name": "Malloc0", 00:17:33.551 "name": "Malloc0", 00:17:33.551 "nguid": "B6C896C77957440FA12A18E608E506E6", 00:17:33.551 "uuid": "b6c896c7-7957-440f-a12a-18e608e506e6" 00:17:33.551 } 00:17:33.551 ] 00:17:33.551 } 00:17:33.551 ] 00:17:33.551 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.551 01:04:45 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:33.551 01:04:45 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:17:33.551 01:04:45 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1286228 00:17:33.551 01:04:45 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:17:33.551 01:04:45 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:17:33.551 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:17:33.551 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:33.551 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:17:33.551 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:17:33.551 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:17:33.551 EAL: No free 2048 kB hugepages reported on node 1 00:17:33.551 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:33.551 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:17:33.551 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:17:33.551 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:17:33.551 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:33.551 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:33.551 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:17:33.551 01:04:45 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:17:33.551 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.551 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:33.810 Malloc1 00:17:33.810 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.810 01:04:45 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:17:33.810 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.810 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:33.810 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.810 01:04:45 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:17:33.810 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.810 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:33.810 Asynchronous Event Request test 00:17:33.810 Attaching to 10.0.0.2 00:17:33.810 Attached to 10.0.0.2 00:17:33.810 Registering asynchronous event callbacks... 00:17:33.810 Starting namespace attribute notice tests for all controllers... 00:17:33.810 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:33.810 aer_cb - Changed Namespace 00:17:33.810 Cleaning up... 00:17:33.810 [ 00:17:33.810 { 00:17:33.810 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:33.810 "subtype": "Discovery", 00:17:33.810 "listen_addresses": [], 00:17:33.810 "allow_any_host": true, 00:17:33.810 "hosts": [] 00:17:33.810 }, 00:17:33.810 { 00:17:33.810 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:33.810 "subtype": "NVMe", 00:17:33.810 "listen_addresses": [ 00:17:33.810 { 00:17:33.810 "trtype": "TCP", 00:17:33.810 "adrfam": "IPv4", 00:17:33.810 "traddr": "10.0.0.2", 00:17:33.810 "trsvcid": "4420" 00:17:33.810 } 00:17:33.810 ], 00:17:33.810 "allow_any_host": true, 00:17:33.810 "hosts": [], 00:17:33.810 "serial_number": "SPDK00000000000001", 00:17:33.810 "model_number": "SPDK bdev Controller", 00:17:33.810 "max_namespaces": 2, 00:17:33.810 "min_cntlid": 1, 00:17:33.810 "max_cntlid": 65519, 00:17:33.810 "namespaces": [ 00:17:33.810 { 00:17:33.810 "nsid": 1, 00:17:33.810 "bdev_name": "Malloc0", 00:17:33.810 "name": "Malloc0", 00:17:33.810 "nguid": "B6C896C77957440FA12A18E608E506E6", 00:17:33.810 "uuid": "b6c896c7-7957-440f-a12a-18e608e506e6" 00:17:33.810 }, 00:17:33.810 { 00:17:33.810 "nsid": 2, 00:17:33.810 "bdev_name": "Malloc1", 00:17:33.810 "name": "Malloc1", 00:17:33.810 "nguid": "D66F10A0E2324152BA616C69E87FD7AF", 00:17:33.810 "uuid": "d66f10a0-e232-4152-ba61-6c69e87fd7af" 00:17:33.810 } 00:17:33.810 ] 00:17:33.810 } 00:17:33.810 ] 00:17:33.810 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.810 01:04:45 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1286228 00:17:33.810 01:04:45 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:33.810 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.810 01:04:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:33.810 01:04:46 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.810 01:04:46 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:33.810 01:04:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.810 01:04:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:33.810 01:04:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.810 01:04:46 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:33.810 01:04:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.810 01:04:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:33.810 01:04:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.810 01:04:46 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:17:33.810 01:04:46 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:17:33.810 01:04:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:33.810 01:04:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:17:33.810 01:04:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:33.810 01:04:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:17:33.810 01:04:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:33.810 01:04:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:33.810 rmmod nvme_tcp 00:17:33.810 rmmod nvme_fabrics 00:17:33.810 rmmod nvme_keyring 00:17:33.810 01:04:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:33.810 01:04:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:17:33.810 01:04:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:17:33.810 01:04:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1286197 ']' 00:17:33.810 01:04:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1286197 00:17:33.810 01:04:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 1286197 ']' 00:17:33.810 01:04:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 1286197 00:17:33.810 01:04:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:17:33.810 01:04:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:33.810 01:04:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1286197 00:17:33.810 01:04:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:33.810 01:04:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:33.810 01:04:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1286197' 00:17:33.810 killing process with pid 1286197 00:17:33.810 01:04:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 1286197 00:17:33.810 [2024-05-15 01:04:46.143039] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:33.810 01:04:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 1286197 00:17:34.070 01:04:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:34.070 01:04:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:34.070 01:04:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:34.070 01:04:46 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:34.070 01:04:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:34.070 01:04:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.070 01:04:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:34.070 01:04:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.604 01:04:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:36.604 00:17:36.604 real 0m5.749s 00:17:36.604 user 0m4.355s 00:17:36.604 sys 0m2.191s 00:17:36.604 01:04:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:36.604 01:04:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:36.604 ************************************ 00:17:36.604 END TEST nvmf_aer 00:17:36.604 ************************************ 00:17:36.604 01:04:48 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:36.604 01:04:48 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:36.604 01:04:48 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:36.604 01:04:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:36.604 ************************************ 00:17:36.604 START TEST nvmf_async_init 00:17:36.604 ************************************ 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:36.604 * Looking for test storage... 00:17:36.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:36.604 
01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=09082f83316c4596aeb84f9b5afa1f71 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:17:36.604 01:04:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:39.133 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:39.133 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:17:39.133 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:39.133 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:39.133 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:39.133 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:39.133 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:39.133 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:17:39.133 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:39.133 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:17:39.133 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:17:39.134 01:04:51 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:39.134 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:39.134 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:39.134 01:04:51 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:39.134 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:39.134 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:39.134 01:04:51 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:39.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:39.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:17:39.134 00:17:39.134 --- 10.0.0.2 ping statistics --- 00:17:39.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.134 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:39.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:39.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:17:39.134 00:17:39.134 --- 10.0.0.1 ping statistics --- 00:17:39.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.134 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:39.134 01:04:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:39.135 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1288577 00:17:39.135 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:39.135 01:04:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1288577 00:17:39.135 01:04:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 1288577 ']' 00:17:39.135 01:04:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.135 01:04:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:39.135 01:04:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.135 01:04:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:39.135 01:04:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:39.135 [2024-05-15 01:04:51.287413] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:17:39.135 [2024-05-15 01:04:51.287492] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.135 EAL: No free 2048 kB hugepages reported on node 1 00:17:39.135 [2024-05-15 01:04:51.363018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.135 [2024-05-15 01:04:51.473827] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:39.135 [2024-05-15 01:04:51.473884] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:39.135 [2024-05-15 01:04:51.473897] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:39.135 [2024-05-15 01:04:51.473923] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:39.135 [2024-05-15 01:04:51.473938] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:39.135 [2024-05-15 01:04:51.473987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.068 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:40.068 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:17:40.068 01:04:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:40.068 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:40.068 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:40.068 01:04:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.068 01:04:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:17:40.068 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.068 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:40.068 [2024-05-15 01:04:52.251090] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:40.068 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.068 01:04:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:17:40.068 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.068 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:40.068 null0 00:17:40.068 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.068 01:04:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:17:40.068 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.068 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:40.068 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.068 01:04:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:17:40.068 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.068 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:40.068 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.068 01:04:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 09082f83316c4596aeb84f9b5afa1f71 00:17:40.068 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.068 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:40.068 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.068 01:04:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:40.068 
01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.068 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:40.068 [2024-05-15 01:04:52.291134] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:40.068 [2024-05-15 01:04:52.291410] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.068 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.068 01:04:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:17:40.068 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.068 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:40.325 nvme0n1 00:17:40.325 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.325 01:04:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:40.325 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.325 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:40.325 [ 00:17:40.325 { 00:17:40.325 "name": "nvme0n1", 00:17:40.325 "aliases": [ 00:17:40.325 "09082f83-316c-4596-aeb8-4f9b5afa1f71" 00:17:40.325 ], 00:17:40.325 "product_name": "NVMe disk", 00:17:40.325 "block_size": 512, 00:17:40.325 "num_blocks": 2097152, 00:17:40.325 "uuid": "09082f83-316c-4596-aeb8-4f9b5afa1f71", 00:17:40.325 "assigned_rate_limits": { 00:17:40.325 "rw_ios_per_sec": 0, 00:17:40.325 "rw_mbytes_per_sec": 0, 00:17:40.325 "r_mbytes_per_sec": 0, 00:17:40.325 "w_mbytes_per_sec": 0 00:17:40.325 }, 00:17:40.325 "claimed": false, 00:17:40.325 "zoned": false, 00:17:40.325 "supported_io_types": { 00:17:40.325 "read": true, 00:17:40.325 "write": true, 00:17:40.325 "unmap": false, 00:17:40.325 "write_zeroes": true, 00:17:40.325 "flush": true, 00:17:40.325 "reset": true, 00:17:40.325 "compare": true, 00:17:40.325 "compare_and_write": true, 00:17:40.325 "abort": true, 00:17:40.325 "nvme_admin": true, 00:17:40.325 "nvme_io": true 00:17:40.325 }, 00:17:40.325 "memory_domains": [ 00:17:40.325 { 00:17:40.325 "dma_device_id": "system", 00:17:40.325 "dma_device_type": 1 00:17:40.325 } 00:17:40.325 ], 00:17:40.325 "driver_specific": { 00:17:40.325 "nvme": [ 00:17:40.325 { 00:17:40.325 "trid": { 00:17:40.325 "trtype": "TCP", 00:17:40.325 "adrfam": "IPv4", 00:17:40.325 "traddr": "10.0.0.2", 00:17:40.325 "trsvcid": "4420", 00:17:40.325 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:40.325 }, 00:17:40.325 "ctrlr_data": { 00:17:40.325 "cntlid": 1, 00:17:40.325 "vendor_id": "0x8086", 00:17:40.325 "model_number": "SPDK bdev Controller", 00:17:40.325 "serial_number": "00000000000000000000", 00:17:40.325 "firmware_revision": "24.05", 00:17:40.325 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:40.325 "oacs": { 00:17:40.325 "security": 0, 00:17:40.325 "format": 0, 00:17:40.325 "firmware": 0, 00:17:40.325 "ns_manage": 0 00:17:40.325 }, 00:17:40.325 "multi_ctrlr": true, 00:17:40.325 "ana_reporting": false 00:17:40.325 }, 00:17:40.325 "vs": { 00:17:40.325 "nvme_version": "1.3" 00:17:40.325 }, 00:17:40.325 "ns_data": { 00:17:40.325 "id": 1, 00:17:40.325 "can_share": true 00:17:40.325 } 
00:17:40.325 } 00:17:40.325 ], 00:17:40.325 "mp_policy": "active_passive" 00:17:40.325 } 00:17:40.325 } 00:17:40.325 ] 00:17:40.325 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.325 01:04:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:17:40.325 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.325 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:40.325 [2024-05-15 01:04:52.543978] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:40.325 [2024-05-15 01:04:52.544068] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1059b20 (9): Bad file descriptor 00:17:40.325 [2024-05-15 01:04:52.686087] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:40.325 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.325 01:04:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:40.325 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.325 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:40.325 [ 00:17:40.325 { 00:17:40.325 "name": "nvme0n1", 00:17:40.325 "aliases": [ 00:17:40.325 "09082f83-316c-4596-aeb8-4f9b5afa1f71" 00:17:40.325 ], 00:17:40.325 "product_name": "NVMe disk", 00:17:40.325 "block_size": 512, 00:17:40.325 "num_blocks": 2097152, 00:17:40.325 "uuid": "09082f83-316c-4596-aeb8-4f9b5afa1f71", 00:17:40.325 "assigned_rate_limits": { 00:17:40.325 "rw_ios_per_sec": 0, 00:17:40.325 "rw_mbytes_per_sec": 0, 00:17:40.325 "r_mbytes_per_sec": 0, 00:17:40.325 "w_mbytes_per_sec": 0 00:17:40.325 }, 00:17:40.325 "claimed": false, 00:17:40.325 "zoned": false, 00:17:40.325 "supported_io_types": { 00:17:40.325 "read": true, 00:17:40.325 "write": true, 00:17:40.325 "unmap": false, 00:17:40.325 "write_zeroes": true, 00:17:40.325 "flush": true, 00:17:40.325 "reset": true, 00:17:40.325 "compare": true, 00:17:40.325 "compare_and_write": true, 00:17:40.325 "abort": true, 00:17:40.325 "nvme_admin": true, 00:17:40.325 "nvme_io": true 00:17:40.325 }, 00:17:40.325 "memory_domains": [ 00:17:40.325 { 00:17:40.325 "dma_device_id": "system", 00:17:40.325 "dma_device_type": 1 00:17:40.325 } 00:17:40.325 ], 00:17:40.325 "driver_specific": { 00:17:40.325 "nvme": [ 00:17:40.325 { 00:17:40.325 "trid": { 00:17:40.325 "trtype": "TCP", 00:17:40.325 "adrfam": "IPv4", 00:17:40.325 "traddr": "10.0.0.2", 00:17:40.325 "trsvcid": "4420", 00:17:40.325 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:40.325 }, 00:17:40.325 "ctrlr_data": { 00:17:40.325 "cntlid": 2, 00:17:40.325 "vendor_id": "0x8086", 00:17:40.325 "model_number": "SPDK bdev Controller", 00:17:40.325 "serial_number": "00000000000000000000", 00:17:40.325 "firmware_revision": "24.05", 00:17:40.325 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:40.325 "oacs": { 00:17:40.325 "security": 0, 00:17:40.325 "format": 0, 00:17:40.325 "firmware": 0, 00:17:40.325 "ns_manage": 0 00:17:40.325 }, 00:17:40.325 "multi_ctrlr": true, 00:17:40.325 "ana_reporting": false 00:17:40.325 }, 00:17:40.325 "vs": { 00:17:40.325 "nvme_version": "1.3" 00:17:40.325 }, 00:17:40.325 "ns_data": { 00:17:40.325 "id": 1, 00:17:40.325 "can_share": true 00:17:40.325 } 00:17:40.325 } 00:17:40.325 ], 00:17:40.325 "mp_policy": "active_passive" 
00:17:40.325 } 00:17:40.325 } 00:17:40.325 ] 00:17:40.325 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.325 01:04:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.325 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.325 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:40.583 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.583 01:04:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:17:40.583 01:04:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.bV9yXxOAGV 00:17:40.583 01:04:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:40.583 01:04:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.bV9yXxOAGV 00:17:40.583 01:04:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:17:40.583 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.583 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:40.583 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.583 01:04:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:17:40.583 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.583 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:40.583 [2024-05-15 01:04:52.736611] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:40.583 [2024-05-15 01:04:52.736739] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:40.583 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.583 01:04:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bV9yXxOAGV 00:17:40.583 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.583 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:40.583 [2024-05-15 01:04:52.744636] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:40.583 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.583 01:04:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bV9yXxOAGV 00:17:40.583 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.583 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:40.583 [2024-05-15 01:04:52.752649] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:40.583 [2024-05-15 01:04:52.752707] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to 
be removed in v24.09 00:17:40.583 nvme0n1 00:17:40.583 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.583 01:04:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:40.583 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.583 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:40.583 [ 00:17:40.583 { 00:17:40.583 "name": "nvme0n1", 00:17:40.583 "aliases": [ 00:17:40.583 "09082f83-316c-4596-aeb8-4f9b5afa1f71" 00:17:40.583 ], 00:17:40.583 "product_name": "NVMe disk", 00:17:40.583 "block_size": 512, 00:17:40.583 "num_blocks": 2097152, 00:17:40.583 "uuid": "09082f83-316c-4596-aeb8-4f9b5afa1f71", 00:17:40.583 "assigned_rate_limits": { 00:17:40.583 "rw_ios_per_sec": 0, 00:17:40.583 "rw_mbytes_per_sec": 0, 00:17:40.583 "r_mbytes_per_sec": 0, 00:17:40.583 "w_mbytes_per_sec": 0 00:17:40.583 }, 00:17:40.583 "claimed": false, 00:17:40.583 "zoned": false, 00:17:40.583 "supported_io_types": { 00:17:40.583 "read": true, 00:17:40.583 "write": true, 00:17:40.583 "unmap": false, 00:17:40.583 "write_zeroes": true, 00:17:40.583 "flush": true, 00:17:40.583 "reset": true, 00:17:40.583 "compare": true, 00:17:40.583 "compare_and_write": true, 00:17:40.583 "abort": true, 00:17:40.583 "nvme_admin": true, 00:17:40.583 "nvme_io": true 00:17:40.583 }, 00:17:40.583 "memory_domains": [ 00:17:40.583 { 00:17:40.583 "dma_device_id": "system", 00:17:40.583 "dma_device_type": 1 00:17:40.583 } 00:17:40.583 ], 00:17:40.583 "driver_specific": { 00:17:40.583 "nvme": [ 00:17:40.583 { 00:17:40.583 "trid": { 00:17:40.583 "trtype": "TCP", 00:17:40.583 "adrfam": "IPv4", 00:17:40.583 "traddr": "10.0.0.2", 00:17:40.583 "trsvcid": "4421", 00:17:40.583 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:40.583 }, 00:17:40.583 "ctrlr_data": { 00:17:40.583 "cntlid": 3, 00:17:40.583 "vendor_id": "0x8086", 00:17:40.583 "model_number": "SPDK bdev Controller", 00:17:40.583 "serial_number": "00000000000000000000", 00:17:40.583 "firmware_revision": "24.05", 00:17:40.583 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:40.583 "oacs": { 00:17:40.583 "security": 0, 00:17:40.583 "format": 0, 00:17:40.583 "firmware": 0, 00:17:40.583 "ns_manage": 0 00:17:40.583 }, 00:17:40.583 "multi_ctrlr": true, 00:17:40.583 "ana_reporting": false 00:17:40.583 }, 00:17:40.583 "vs": { 00:17:40.583 "nvme_version": "1.3" 00:17:40.583 }, 00:17:40.583 "ns_data": { 00:17:40.583 "id": 1, 00:17:40.583 "can_share": true 00:17:40.583 } 00:17:40.583 } 00:17:40.583 ], 00:17:40.583 "mp_policy": "active_passive" 00:17:40.583 } 00:17:40.583 } 00:17:40.584 ] 00:17:40.584 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.584 01:04:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.584 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.584 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:40.584 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.584 01:04:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.bV9yXxOAGV 00:17:40.584 01:04:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:17:40.584 01:04:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:17:40.584 01:04:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:17:40.584 01:04:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:17:40.584 01:04:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:40.584 01:04:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:17:40.584 01:04:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:40.584 01:04:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:40.584 rmmod nvme_tcp 00:17:40.584 rmmod nvme_fabrics 00:17:40.584 rmmod nvme_keyring 00:17:40.584 01:04:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:40.584 01:04:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:17:40.584 01:04:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:17:40.584 01:04:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1288577 ']' 00:17:40.584 01:04:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1288577 00:17:40.584 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 1288577 ']' 00:17:40.584 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 1288577 00:17:40.584 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:17:40.584 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:40.584 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1288577 00:17:40.584 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:40.584 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:40.584 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1288577' 00:17:40.584 killing process with pid 1288577 00:17:40.584 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 1288577 00:17:40.584 [2024-05-15 01:04:52.929595] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:40.584 [2024-05-15 01:04:52.929632] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:40.584 [2024-05-15 01:04:52.929648] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:40.584 01:04:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 1288577 00:17:40.842 01:04:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:40.842 01:04:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:40.842 01:04:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:40.842 01:04:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:40.842 01:04:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:40.842 01:04:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.842 01:04:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:40.842 01:04:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.377 01:04:55 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:43.377 00:17:43.377 real 0m6.709s 00:17:43.377 user 0m3.149s 00:17:43.377 sys 0m2.178s 00:17:43.377 01:04:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:43.377 01:04:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:43.377 ************************************ 00:17:43.377 END TEST nvmf_async_init 00:17:43.377 ************************************ 00:17:43.377 01:04:55 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:43.377 01:04:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:43.377 01:04:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:43.377 01:04:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:43.377 ************************************ 00:17:43.377 START TEST dma 00:17:43.377 ************************************ 00:17:43.377 01:04:55 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:43.377 * Looking for test storage... 00:17:43.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:43.377 01:04:55 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:43.377 01:04:55 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:17:43.377 01:04:55 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:43.377 01:04:55 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:43.377 01:04:55 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:43.377 01:04:55 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:43.377 01:04:55 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:43.377 01:04:55 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:43.377 01:04:55 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:43.377 01:04:55 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:43.377 01:04:55 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:43.377 01:04:55 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:43.377 01:04:55 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:43.377 01:04:55 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:43.377 01:04:55 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:43.377 01:04:55 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:43.377 01:04:55 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:43.377 01:04:55 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:43.378 01:04:55 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:43.378 01:04:55 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:43.378 01:04:55 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:43.378 01:04:55 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:43.378 01:04:55 nvmf_tcp.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.378 01:04:55 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.378 01:04:55 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.378 01:04:55 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:17:43.378 01:04:55 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.378 01:04:55 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:17:43.378 01:04:55 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:43.378 01:04:55 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:43.378 01:04:55 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:43.378 01:04:55 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:43.378 01:04:55 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:43.378 01:04:55 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:43.378 01:04:55 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:43.378 01:04:55 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:43.378 01:04:55 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:17:43.378 01:04:55 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:17:43.378 00:17:43.378 real 0m0.064s 00:17:43.378 user 0m0.029s 00:17:43.378 sys 0m0.041s 00:17:43.378 01:04:55 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:43.378 01:04:55 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:17:43.378 ************************************ 
00:17:43.378 END TEST dma 00:17:43.378 ************************************ 00:17:43.378 01:04:55 nvmf_tcp -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:43.378 01:04:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:43.378 01:04:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:43.378 01:04:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:43.378 ************************************ 00:17:43.378 START TEST nvmf_identify 00:17:43.378 ************************************ 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:43.378 * Looking for test storage... 00:17:43.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:17:43.378 01:04:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:45.909 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:45.909 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:17:45.909 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:45.909 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:45.909 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:45.909 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:45.909 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:45.909 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:17:45.909 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:45.909 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:17:45.909 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:17:45.909 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:17:45.909 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:17:45.909 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:17:45.909 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:17:45.909 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:45.909 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:45.909 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:45.909 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:45.910 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:45.910 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:45.910 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:45.910 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:45.910 01:04:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:45.910 01:04:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:45.910 01:04:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:45.910 01:04:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:45.910 01:04:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:45.910 01:04:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:45.910 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:45.910 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:17:45.910 00:17:45.910 --- 10.0.0.2 ping statistics --- 00:17:45.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.910 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:17:45.910 01:04:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:45.910 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:45.910 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:17:45.910 00:17:45.910 --- 10.0.0.1 ping statistics --- 00:17:45.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.910 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:17:45.910 01:04:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:45.910 01:04:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:17:45.910 01:04:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:45.910 01:04:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:45.910 01:04:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:45.910 01:04:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:45.910 01:04:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:45.910 01:04:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:45.910 01:04:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:45.910 01:04:58 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:45.910 01:04:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:45.910 01:04:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:45.910 01:04:58 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1291120 00:17:45.910 01:04:58 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:45.910 01:04:58 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:45.910 01:04:58 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1291120 00:17:45.910 01:04:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 1291120 ']' 00:17:45.910 01:04:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.910 01:04:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:45.910 01:04:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.910 01:04:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:45.910 01:04:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:45.910 [2024-05-15 01:04:58.147228] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
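The nvmftestinit trace above shows the harness splitting the two e810 ports between a target network namespace and the initiator side before launching nvmf_tgt. A minimal sketch of those steps, reconstructed from the xtrace lines in this log (the cvl_0_0/cvl_0_1 interface names and the 10.0.0.1/10.0.0.2 addresses are simply what this run used; the commands assume root):

#!/usr/bin/env bash
# Sketch of the target-namespace plumbing performed by nvmftestinit above.
# Assumes the two NIC ports have already been renamed cvl_0_0 (target side)
# and cvl_0_1 (initiator side), as in this run.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                    # target port moves into its own namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                 # sanity-check reachability both ways
ip netns exec "$NS" ping -c 1 10.0.0.1
modprobe nvme-tcp                                  # host-side initiator driver
# The target application is then started inside the namespace, as the log shows:
# ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF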
00:17:45.910 [2024-05-15 01:04:58.147337] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:45.910 EAL: No free 2048 kB hugepages reported on node 1 00:17:45.910 [2024-05-15 01:04:58.228391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:46.169 [2024-05-15 01:04:58.337927] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:46.169 [2024-05-15 01:04:58.338012] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:46.169 [2024-05-15 01:04:58.338029] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:46.169 [2024-05-15 01:04:58.338050] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:46.169 [2024-05-15 01:04:58.338058] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:46.169 [2024-05-15 01:04:58.338125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.169 [2024-05-15 01:04:58.338182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:46.169 [2024-05-15 01:04:58.338247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:46.169 [2024-05-15 01:04:58.338249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.734 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:46.734 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:17:46.734 01:04:59 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:46.734 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.734 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:46.734 [2024-05-15 01:04:59.115978] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:46.734 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.734 01:04:59 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:46.734 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:46.734 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:46.994 01:04:59 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:46.994 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.994 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:46.994 Malloc0 00:17:46.994 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.994 01:04:59 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:46.994 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.994 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:46.994 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.994 01:04:59 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 
ABCDEF0123456789 00:17:46.994 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.994 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:46.994 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.994 01:04:59 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:46.994 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.994 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:46.994 [2024-05-15 01:04:59.192885] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:46.994 [2024-05-15 01:04:59.193204] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:46.994 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.994 01:04:59 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:46.994 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.994 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:46.994 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.995 01:04:59 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:46.995 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.995 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:46.995 [ 00:17:46.995 { 00:17:46.995 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:46.995 "subtype": "Discovery", 00:17:46.995 "listen_addresses": [ 00:17:46.995 { 00:17:46.995 "trtype": "TCP", 00:17:46.995 "adrfam": "IPv4", 00:17:46.995 "traddr": "10.0.0.2", 00:17:46.995 "trsvcid": "4420" 00:17:46.995 } 00:17:46.995 ], 00:17:46.995 "allow_any_host": true, 00:17:46.995 "hosts": [] 00:17:46.995 }, 00:17:46.995 { 00:17:46.995 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:46.995 "subtype": "NVMe", 00:17:46.995 "listen_addresses": [ 00:17:46.995 { 00:17:46.995 "trtype": "TCP", 00:17:46.995 "adrfam": "IPv4", 00:17:46.995 "traddr": "10.0.0.2", 00:17:46.995 "trsvcid": "4420" 00:17:46.995 } 00:17:46.995 ], 00:17:46.995 "allow_any_host": true, 00:17:46.995 "hosts": [], 00:17:46.995 "serial_number": "SPDK00000000000001", 00:17:46.995 "model_number": "SPDK bdev Controller", 00:17:46.995 "max_namespaces": 32, 00:17:46.995 "min_cntlid": 1, 00:17:46.995 "max_cntlid": 65519, 00:17:46.995 "namespaces": [ 00:17:46.995 { 00:17:46.995 "nsid": 1, 00:17:46.995 "bdev_name": "Malloc0", 00:17:46.995 "name": "Malloc0", 00:17:46.995 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:46.995 "eui64": "ABCDEF0123456789", 00:17:46.995 "uuid": "17133876-65b2-4ab9-a8eb-41f17de4347a" 00:17:46.995 } 00:17:46.995 ] 00:17:46.995 } 00:17:46.995 ] 00:17:46.995 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.995 01:04:59 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:46.995 [2024-05-15 
01:04:59.235892] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:17:46.995 [2024-05-15 01:04:59.235953] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1291275 ] 00:17:46.995 EAL: No free 2048 kB hugepages reported on node 1 00:17:46.995 [2024-05-15 01:04:59.273821] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:17:46.995 [2024-05-15 01:04:59.273882] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:46.995 [2024-05-15 01:04:59.273892] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:46.995 [2024-05-15 01:04:59.273907] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:46.995 [2024-05-15 01:04:59.277960] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:46.995 [2024-05-15 01:04:59.278334] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:17:46.995 [2024-05-15 01:04:59.278397] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xb9ec80 0 00:17:46.995 [2024-05-15 01:04:59.292938] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:46.995 [2024-05-15 01:04:59.292958] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:46.995 [2024-05-15 01:04:59.292986] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:46.995 [2024-05-15 01:04:59.292994] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:46.995 [2024-05-15 01:04:59.293047] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.995 [2024-05-15 01:04:59.293059] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.995 [2024-05-15 01:04:59.293067] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb9ec80) 00:17:46.995 [2024-05-15 01:04:59.293085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:46.995 [2024-05-15 01:04:59.293112] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfde40, cid 0, qid 0 00:17:46.995 [2024-05-15 01:04:59.298942] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.995 [2024-05-15 01:04:59.298961] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.995 [2024-05-15 01:04:59.298968] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.995 [2024-05-15 01:04:59.298976] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbfde40) on tqpair=0xb9ec80 00:17:46.995 [2024-05-15 01:04:59.298992] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:46.995 [2024-05-15 01:04:59.299003] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:17:46.995 [2024-05-15 01:04:59.299018] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:17:46.995 [2024-05-15 01:04:59.299037] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:17:46.995 [2024-05-15 01:04:59.299046] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.995 [2024-05-15 01:04:59.299053] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb9ec80) 00:17:46.995 [2024-05-15 01:04:59.299064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.995 [2024-05-15 01:04:59.299088] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfde40, cid 0, qid 0 00:17:46.995 [2024-05-15 01:04:59.299276] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.995 [2024-05-15 01:04:59.299291] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.995 [2024-05-15 01:04:59.299299] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.995 [2024-05-15 01:04:59.299306] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbfde40) on tqpair=0xb9ec80 00:17:46.995 [2024-05-15 01:04:59.299315] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:17:46.995 [2024-05-15 01:04:59.299329] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:17:46.995 [2024-05-15 01:04:59.299341] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.995 [2024-05-15 01:04:59.299349] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.995 [2024-05-15 01:04:59.299355] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb9ec80) 00:17:46.995 [2024-05-15 01:04:59.299366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.995 [2024-05-15 01:04:59.299387] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfde40, cid 0, qid 0 00:17:46.995 [2024-05-15 01:04:59.299559] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.995 [2024-05-15 01:04:59.299571] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.995 [2024-05-15 01:04:59.299578] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.995 [2024-05-15 01:04:59.299585] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbfde40) on tqpair=0xb9ec80 00:17:46.995 [2024-05-15 01:04:59.299594] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:17:46.995 [2024-05-15 01:04:59.299608] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:17:46.995 [2024-05-15 01:04:59.299619] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.995 [2024-05-15 01:04:59.299627] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.995 [2024-05-15 01:04:59.299633] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb9ec80) 00:17:46.995 [2024-05-15 01:04:59.299644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.995 [2024-05-15 01:04:59.299665] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfde40, cid 0, qid 0 00:17:46.995 [2024-05-15 01:04:59.299817] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.995 [2024-05-15 01:04:59.299829] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.995 [2024-05-15 01:04:59.299836] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.995 [2024-05-15 01:04:59.299843] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbfde40) on tqpair=0xb9ec80 00:17:46.995 [2024-05-15 01:04:59.299852] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:46.995 [2024-05-15 01:04:59.299873] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.995 [2024-05-15 01:04:59.299882] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.995 [2024-05-15 01:04:59.299889] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb9ec80) 00:17:46.995 [2024-05-15 01:04:59.299900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.995 [2024-05-15 01:04:59.299920] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfde40, cid 0, qid 0 00:17:46.995 [2024-05-15 01:04:59.300081] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.995 [2024-05-15 01:04:59.300097] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.995 [2024-05-15 01:04:59.300104] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.995 [2024-05-15 01:04:59.300111] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbfde40) on tqpair=0xb9ec80 00:17:46.995 [2024-05-15 01:04:59.300119] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:17:46.995 [2024-05-15 01:04:59.300128] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:17:46.995 [2024-05-15 01:04:59.300141] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:46.995 [2024-05-15 01:04:59.300252] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:17:46.995 [2024-05-15 01:04:59.300275] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:46.995 [2024-05-15 01:04:59.300289] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.995 [2024-05-15 01:04:59.300296] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.995 [2024-05-15 01:04:59.300303] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb9ec80) 00:17:46.995 [2024-05-15 01:04:59.300313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.995 [2024-05-15 01:04:59.300334] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfde40, cid 0, qid 0 00:17:46.995 [2024-05-15 01:04:59.300522] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.995 [2024-05-15 01:04:59.300537] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.995 
[2024-05-15 01:04:59.300544] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.995 [2024-05-15 01:04:59.300551] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbfde40) on tqpair=0xb9ec80 00:17:46.996 [2024-05-15 01:04:59.300560] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:46.996 [2024-05-15 01:04:59.300577] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.996 [2024-05-15 01:04:59.300586] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.996 [2024-05-15 01:04:59.300592] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb9ec80) 00:17:46.996 [2024-05-15 01:04:59.300603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.996 [2024-05-15 01:04:59.300623] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfde40, cid 0, qid 0 00:17:46.996 [2024-05-15 01:04:59.300793] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.996 [2024-05-15 01:04:59.300808] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.996 [2024-05-15 01:04:59.300815] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.996 [2024-05-15 01:04:59.300822] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbfde40) on tqpair=0xb9ec80 00:17:46.996 [2024-05-15 01:04:59.300830] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:46.996 [2024-05-15 01:04:59.300843] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:17:46.996 [2024-05-15 01:04:59.300857] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:17:46.996 [2024-05-15 01:04:59.300871] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:17:46.996 [2024-05-15 01:04:59.300886] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.996 [2024-05-15 01:04:59.300893] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb9ec80) 00:17:46.996 [2024-05-15 01:04:59.300905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.996 [2024-05-15 01:04:59.300926] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfde40, cid 0, qid 0 00:17:46.996 [2024-05-15 01:04:59.301130] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:46.996 [2024-05-15 01:04:59.301143] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:46.996 [2024-05-15 01:04:59.301151] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:46.996 [2024-05-15 01:04:59.301157] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb9ec80): datao=0, datal=4096, cccid=0 00:17:46.996 [2024-05-15 01:04:59.301165] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbfde40) on tqpair(0xb9ec80): expected_datao=0, payload_size=4096 00:17:46.996 
[2024-05-15 01:04:59.301173] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.996 [2024-05-15 01:04:59.301212] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:46.996 [2024-05-15 01:04:59.301222] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:46.996 [2024-05-15 01:04:59.342096] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.996 [2024-05-15 01:04:59.342115] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.996 [2024-05-15 01:04:59.342123] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.996 [2024-05-15 01:04:59.342130] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbfde40) on tqpair=0xb9ec80 00:17:46.996 [2024-05-15 01:04:59.342143] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:17:46.996 [2024-05-15 01:04:59.342152] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:17:46.996 [2024-05-15 01:04:59.342159] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:17:46.996 [2024-05-15 01:04:59.342168] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:17:46.996 [2024-05-15 01:04:59.342176] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:17:46.996 [2024-05-15 01:04:59.342184] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:17:46.996 [2024-05-15 01:04:59.342205] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:17:46.996 [2024-05-15 01:04:59.342222] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.996 [2024-05-15 01:04:59.342230] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.996 [2024-05-15 01:04:59.342237] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb9ec80) 00:17:46.996 [2024-05-15 01:04:59.342249] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:46.996 [2024-05-15 01:04:59.342271] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfde40, cid 0, qid 0 00:17:46.996 [2024-05-15 01:04:59.342435] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.996 [2024-05-15 01:04:59.342451] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.996 [2024-05-15 01:04:59.342458] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.996 [2024-05-15 01:04:59.342465] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbfde40) on tqpair=0xb9ec80 00:17:46.996 [2024-05-15 01:04:59.342477] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.996 [2024-05-15 01:04:59.342485] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.996 [2024-05-15 01:04:59.342491] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb9ec80) 00:17:46.996 [2024-05-15 01:04:59.342501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 
nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.996 [2024-05-15 01:04:59.342512] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.996 [2024-05-15 01:04:59.342519] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.996 [2024-05-15 01:04:59.342525] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xb9ec80) 00:17:46.996 [2024-05-15 01:04:59.342534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.996 [2024-05-15 01:04:59.342543] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.996 [2024-05-15 01:04:59.342550] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.996 [2024-05-15 01:04:59.342557] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xb9ec80) 00:17:46.996 [2024-05-15 01:04:59.342565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.996 [2024-05-15 01:04:59.342575] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.996 [2024-05-15 01:04:59.342582] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.996 [2024-05-15 01:04:59.342588] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb9ec80) 00:17:46.996 [2024-05-15 01:04:59.342597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.996 [2024-05-15 01:04:59.342606] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:17:46.996 [2024-05-15 01:04:59.342625] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:46.996 [2024-05-15 01:04:59.342652] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.996 [2024-05-15 01:04:59.342660] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb9ec80) 00:17:46.996 [2024-05-15 01:04:59.342670] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.996 [2024-05-15 01:04:59.342692] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfde40, cid 0, qid 0 00:17:46.996 [2024-05-15 01:04:59.342718] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfdfa0, cid 1, qid 0 00:17:46.996 [2024-05-15 01:04:59.342726] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfe100, cid 2, qid 0 00:17:46.996 [2024-05-15 01:04:59.342734] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfe260, cid 3, qid 0 00:17:46.996 [2024-05-15 01:04:59.342742] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfe3c0, cid 4, qid 0 00:17:46.996 [2024-05-15 01:04:59.342928] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.996 [2024-05-15 01:04:59.346958] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.996 [2024-05-15 01:04:59.346969] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.996 [2024-05-15 01:04:59.346976] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbfe3c0) on 
tqpair=0xb9ec80 00:17:46.996 [2024-05-15 01:04:59.346989] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:17:46.996 [2024-05-15 01:04:59.346999] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:17:46.996 [2024-05-15 01:04:59.347018] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.996 [2024-05-15 01:04:59.347028] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb9ec80) 00:17:46.996 [2024-05-15 01:04:59.347039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.996 [2024-05-15 01:04:59.347060] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfe3c0, cid 4, qid 0 00:17:46.996 [2024-05-15 01:04:59.347527] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:46.996 [2024-05-15 01:04:59.347543] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:46.996 [2024-05-15 01:04:59.347550] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:46.996 [2024-05-15 01:04:59.347557] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb9ec80): datao=0, datal=4096, cccid=4 00:17:46.996 [2024-05-15 01:04:59.347565] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbfe3c0) on tqpair(0xb9ec80): expected_datao=0, payload_size=4096 00:17:46.996 [2024-05-15 01:04:59.347572] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.996 [2024-05-15 01:04:59.347618] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:46.996 [2024-05-15 01:04:59.347627] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:46.996 [2024-05-15 01:04:59.347745] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.996 [2024-05-15 01:04:59.347760] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.996 [2024-05-15 01:04:59.347767] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.996 [2024-05-15 01:04:59.347774] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbfe3c0) on tqpair=0xb9ec80 00:17:46.996 [2024-05-15 01:04:59.347793] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:17:46.996 [2024-05-15 01:04:59.347831] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.997 [2024-05-15 01:04:59.347841] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb9ec80) 00:17:46.997 [2024-05-15 01:04:59.347852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.997 [2024-05-15 01:04:59.347864] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.997 [2024-05-15 01:04:59.347871] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:46.997 [2024-05-15 01:04:59.347877] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb9ec80) 00:17:46.997 [2024-05-15 01:04:59.347887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.997 [2024-05-15 01:04:59.347914] 
nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfe3c0, cid 4, qid 0 00:17:46.997 [2024-05-15 01:04:59.347925] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfe520, cid 5, qid 0 00:17:46.997 [2024-05-15 01:04:59.348162] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:46.997 [2024-05-15 01:04:59.348178] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:46.997 [2024-05-15 01:04:59.348186] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:46.997 [2024-05-15 01:04:59.348192] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb9ec80): datao=0, datal=1024, cccid=4 00:17:46.997 [2024-05-15 01:04:59.348200] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbfe3c0) on tqpair(0xb9ec80): expected_datao=0, payload_size=1024 00:17:46.997 [2024-05-15 01:04:59.348208] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:46.997 [2024-05-15 01:04:59.348217] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:46.997 [2024-05-15 01:04:59.348229] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:46.997 [2024-05-15 01:04:59.348239] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:46.997 [2024-05-15 01:04:59.348248] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:46.997 [2024-05-15 01:04:59.348255] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:46.997 [2024-05-15 01:04:59.348261] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbfe520) on tqpair=0xb9ec80 00:17:47.259 [2024-05-15 01:04:59.389096] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.259 [2024-05-15 01:04:59.389118] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.259 [2024-05-15 01:04:59.389126] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.259 [2024-05-15 01:04:59.389134] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbfe3c0) on tqpair=0xb9ec80 00:17:47.259 [2024-05-15 01:04:59.389153] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.259 [2024-05-15 01:04:59.389163] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb9ec80) 00:17:47.259 [2024-05-15 01:04:59.389175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.259 [2024-05-15 01:04:59.389205] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfe3c0, cid 4, qid 0 00:17:47.259 [2024-05-15 01:04:59.389384] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:47.259 [2024-05-15 01:04:59.389397] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:47.259 [2024-05-15 01:04:59.389405] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:47.259 [2024-05-15 01:04:59.389411] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb9ec80): datao=0, datal=3072, cccid=4 00:17:47.259 [2024-05-15 01:04:59.389419] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbfe3c0) on tqpair(0xb9ec80): expected_datao=0, payload_size=3072 00:17:47.259 [2024-05-15 01:04:59.389427] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.259 [2024-05-15 01:04:59.389471] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:17:47.259 [2024-05-15 01:04:59.389481] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:47.259 [2024-05-15 01:04:59.430075] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.259 [2024-05-15 01:04:59.430094] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.259 [2024-05-15 01:04:59.430102] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.259 [2024-05-15 01:04:59.430109] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbfe3c0) on tqpair=0xb9ec80 00:17:47.259 [2024-05-15 01:04:59.430126] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.259 [2024-05-15 01:04:59.430135] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb9ec80) 00:17:47.259 [2024-05-15 01:04:59.430147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.259 [2024-05-15 01:04:59.430176] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfe3c0, cid 4, qid 0 00:17:47.259 [2024-05-15 01:04:59.430348] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:47.259 [2024-05-15 01:04:59.430364] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:47.259 [2024-05-15 01:04:59.430371] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:47.259 [2024-05-15 01:04:59.430378] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb9ec80): datao=0, datal=8, cccid=4 00:17:47.259 [2024-05-15 01:04:59.430386] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbfe3c0) on tqpair(0xb9ec80): expected_datao=0, payload_size=8 00:17:47.259 [2024-05-15 01:04:59.430393] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.259 [2024-05-15 01:04:59.430403] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:47.259 [2024-05-15 01:04:59.430411] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:47.259 [2024-05-15 01:04:59.473945] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.259 [2024-05-15 01:04:59.473963] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.259 [2024-05-15 01:04:59.473971] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.259 [2024-05-15 01:04:59.473978] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbfe3c0) on tqpair=0xb9ec80 00:17:47.259 ===================================================== 00:17:47.259 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:47.259 ===================================================== 00:17:47.259 Controller Capabilities/Features 00:17:47.259 ================================ 00:17:47.259 Vendor ID: 0000 00:17:47.259 Subsystem Vendor ID: 0000 00:17:47.259 Serial Number: .................... 00:17:47.259 Model Number: ........................................ 
00:17:47.259 Firmware Version: 24.05 00:17:47.259 Recommended Arb Burst: 0 00:17:47.259 IEEE OUI Identifier: 00 00 00 00:17:47.259 Multi-path I/O 00:17:47.259 May have multiple subsystem ports: No 00:17:47.259 May have multiple controllers: No 00:17:47.259 Associated with SR-IOV VF: No 00:17:47.259 Max Data Transfer Size: 131072 00:17:47.259 Max Number of Namespaces: 0 00:17:47.259 Max Number of I/O Queues: 1024 00:17:47.259 NVMe Specification Version (VS): 1.3 00:17:47.259 NVMe Specification Version (Identify): 1.3 00:17:47.259 Maximum Queue Entries: 128 00:17:47.259 Contiguous Queues Required: Yes 00:17:47.259 Arbitration Mechanisms Supported 00:17:47.259 Weighted Round Robin: Not Supported 00:17:47.259 Vendor Specific: Not Supported 00:17:47.259 Reset Timeout: 15000 ms 00:17:47.259 Doorbell Stride: 4 bytes 00:17:47.259 NVM Subsystem Reset: Not Supported 00:17:47.259 Command Sets Supported 00:17:47.259 NVM Command Set: Supported 00:17:47.259 Boot Partition: Not Supported 00:17:47.259 Memory Page Size Minimum: 4096 bytes 00:17:47.259 Memory Page Size Maximum: 4096 bytes 00:17:47.259 Persistent Memory Region: Not Supported 00:17:47.259 Optional Asynchronous Events Supported 00:17:47.259 Namespace Attribute Notices: Not Supported 00:17:47.259 Firmware Activation Notices: Not Supported 00:17:47.259 ANA Change Notices: Not Supported 00:17:47.259 PLE Aggregate Log Change Notices: Not Supported 00:17:47.259 LBA Status Info Alert Notices: Not Supported 00:17:47.259 EGE Aggregate Log Change Notices: Not Supported 00:17:47.259 Normal NVM Subsystem Shutdown event: Not Supported 00:17:47.259 Zone Descriptor Change Notices: Not Supported 00:17:47.259 Discovery Log Change Notices: Supported 00:17:47.259 Controller Attributes 00:17:47.259 128-bit Host Identifier: Not Supported 00:17:47.259 Non-Operational Permissive Mode: Not Supported 00:17:47.259 NVM Sets: Not Supported 00:17:47.259 Read Recovery Levels: Not Supported 00:17:47.260 Endurance Groups: Not Supported 00:17:47.260 Predictable Latency Mode: Not Supported 00:17:47.260 Traffic Based Keep ALive: Not Supported 00:17:47.260 Namespace Granularity: Not Supported 00:17:47.260 SQ Associations: Not Supported 00:17:47.260 UUID List: Not Supported 00:17:47.260 Multi-Domain Subsystem: Not Supported 00:17:47.260 Fixed Capacity Management: Not Supported 00:17:47.260 Variable Capacity Management: Not Supported 00:17:47.260 Delete Endurance Group: Not Supported 00:17:47.260 Delete NVM Set: Not Supported 00:17:47.260 Extended LBA Formats Supported: Not Supported 00:17:47.260 Flexible Data Placement Supported: Not Supported 00:17:47.260 00:17:47.260 Controller Memory Buffer Support 00:17:47.260 ================================ 00:17:47.260 Supported: No 00:17:47.260 00:17:47.260 Persistent Memory Region Support 00:17:47.260 ================================ 00:17:47.260 Supported: No 00:17:47.260 00:17:47.260 Admin Command Set Attributes 00:17:47.260 ============================ 00:17:47.260 Security Send/Receive: Not Supported 00:17:47.260 Format NVM: Not Supported 00:17:47.260 Firmware Activate/Download: Not Supported 00:17:47.260 Namespace Management: Not Supported 00:17:47.260 Device Self-Test: Not Supported 00:17:47.260 Directives: Not Supported 00:17:47.260 NVMe-MI: Not Supported 00:17:47.260 Virtualization Management: Not Supported 00:17:47.260 Doorbell Buffer Config: Not Supported 00:17:47.260 Get LBA Status Capability: Not Supported 00:17:47.260 Command & Feature Lockdown Capability: Not Supported 00:17:47.260 Abort Command Limit: 1 00:17:47.260 Async 
Event Request Limit: 4 00:17:47.260 Number of Firmware Slots: N/A 00:17:47.260 Firmware Slot 1 Read-Only: N/A 00:17:47.260 Firmware Activation Without Reset: N/A 00:17:47.260 Multiple Update Detection Support: N/A 00:17:47.260 Firmware Update Granularity: No Information Provided 00:17:47.260 Per-Namespace SMART Log: No 00:17:47.260 Asymmetric Namespace Access Log Page: Not Supported 00:17:47.260 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:47.260 Command Effects Log Page: Not Supported 00:17:47.260 Get Log Page Extended Data: Supported 00:17:47.260 Telemetry Log Pages: Not Supported 00:17:47.260 Persistent Event Log Pages: Not Supported 00:17:47.260 Supported Log Pages Log Page: May Support 00:17:47.260 Commands Supported & Effects Log Page: Not Supported 00:17:47.260 Feature Identifiers & Effects Log Page:May Support 00:17:47.260 NVMe-MI Commands & Effects Log Page: May Support 00:17:47.260 Data Area 4 for Telemetry Log: Not Supported 00:17:47.260 Error Log Page Entries Supported: 128 00:17:47.260 Keep Alive: Not Supported 00:17:47.260 00:17:47.260 NVM Command Set Attributes 00:17:47.260 ========================== 00:17:47.260 Submission Queue Entry Size 00:17:47.260 Max: 1 00:17:47.260 Min: 1 00:17:47.260 Completion Queue Entry Size 00:17:47.260 Max: 1 00:17:47.260 Min: 1 00:17:47.260 Number of Namespaces: 0 00:17:47.260 Compare Command: Not Supported 00:17:47.260 Write Uncorrectable Command: Not Supported 00:17:47.260 Dataset Management Command: Not Supported 00:17:47.260 Write Zeroes Command: Not Supported 00:17:47.260 Set Features Save Field: Not Supported 00:17:47.260 Reservations: Not Supported 00:17:47.260 Timestamp: Not Supported 00:17:47.260 Copy: Not Supported 00:17:47.260 Volatile Write Cache: Not Present 00:17:47.260 Atomic Write Unit (Normal): 1 00:17:47.260 Atomic Write Unit (PFail): 1 00:17:47.260 Atomic Compare & Write Unit: 1 00:17:47.260 Fused Compare & Write: Supported 00:17:47.260 Scatter-Gather List 00:17:47.260 SGL Command Set: Supported 00:17:47.260 SGL Keyed: Supported 00:17:47.260 SGL Bit Bucket Descriptor: Not Supported 00:17:47.260 SGL Metadata Pointer: Not Supported 00:17:47.260 Oversized SGL: Not Supported 00:17:47.260 SGL Metadata Address: Not Supported 00:17:47.260 SGL Offset: Supported 00:17:47.260 Transport SGL Data Block: Not Supported 00:17:47.260 Replay Protected Memory Block: Not Supported 00:17:47.260 00:17:47.260 Firmware Slot Information 00:17:47.260 ========================= 00:17:47.260 Active slot: 0 00:17:47.260 00:17:47.260 00:17:47.260 Error Log 00:17:47.260 ========= 00:17:47.260 00:17:47.260 Active Namespaces 00:17:47.260 ================= 00:17:47.260 Discovery Log Page 00:17:47.260 ================== 00:17:47.260 Generation Counter: 2 00:17:47.260 Number of Records: 2 00:17:47.260 Record Format: 0 00:17:47.260 00:17:47.260 Discovery Log Entry 0 00:17:47.260 ---------------------- 00:17:47.260 Transport Type: 3 (TCP) 00:17:47.260 Address Family: 1 (IPv4) 00:17:47.260 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:47.260 Entry Flags: 00:17:47.260 Duplicate Returned Information: 1 00:17:47.260 Explicit Persistent Connection Support for Discovery: 1 00:17:47.260 Transport Requirements: 00:17:47.260 Secure Channel: Not Required 00:17:47.260 Port ID: 0 (0x0000) 00:17:47.260 Controller ID: 65535 (0xffff) 00:17:47.260 Admin Max SQ Size: 128 00:17:47.260 Transport Service Identifier: 4420 00:17:47.260 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:47.260 Transport Address: 10.0.0.2 00:17:47.260 
Discovery Log Entry 1 00:17:47.260 ---------------------- 00:17:47.260 Transport Type: 3 (TCP) 00:17:47.260 Address Family: 1 (IPv4) 00:17:47.260 Subsystem Type: 2 (NVM Subsystem) 00:17:47.260 Entry Flags: 00:17:47.260 Duplicate Returned Information: 0 00:17:47.260 Explicit Persistent Connection Support for Discovery: 0 00:17:47.260 Transport Requirements: 00:17:47.260 Secure Channel: Not Required 00:17:47.260 Port ID: 0 (0x0000) 00:17:47.260 Controller ID: 65535 (0xffff) 00:17:47.260 Admin Max SQ Size: 128 00:17:47.260 Transport Service Identifier: 4420 00:17:47.260 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:47.260 Transport Address: 10.0.0.2 [2024-05-15 01:04:59.474090] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:17:47.260 [2024-05-15 01:04:59.474115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.260 [2024-05-15 01:04:59.474127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.260 [2024-05-15 01:04:59.474137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.260 [2024-05-15 01:04:59.474147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.260 [2024-05-15 01:04:59.474160] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.260 [2024-05-15 01:04:59.474169] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.260 [2024-05-15 01:04:59.474176] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb9ec80) 00:17:47.260 [2024-05-15 01:04:59.474186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.260 [2024-05-15 01:04:59.474210] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfe260, cid 3, qid 0 00:17:47.260 [2024-05-15 01:04:59.474376] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.260 [2024-05-15 01:04:59.474392] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.260 [2024-05-15 01:04:59.474399] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.260 [2024-05-15 01:04:59.474406] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbfe260) on tqpair=0xb9ec80 00:17:47.260 [2024-05-15 01:04:59.474419] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.260 [2024-05-15 01:04:59.474427] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.260 [2024-05-15 01:04:59.474433] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb9ec80) 00:17:47.260 [2024-05-15 01:04:59.474444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.260 [2024-05-15 01:04:59.474470] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfe260, cid 3, qid 0 00:17:47.260 [2024-05-15 01:04:59.474667] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.260 [2024-05-15 01:04:59.474682] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.260 [2024-05-15 01:04:59.474689] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.260 [2024-05-15 01:04:59.474696] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbfe260) on tqpair=0xb9ec80 00:17:47.260 [2024-05-15 01:04:59.474705] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:17:47.260 [2024-05-15 01:04:59.474713] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:17:47.260 [2024-05-15 01:04:59.474730] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.260 [2024-05-15 01:04:59.474739] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.260 [2024-05-15 01:04:59.474745] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb9ec80) 00:17:47.260 [2024-05-15 01:04:59.474756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.260 [2024-05-15 01:04:59.474776] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfe260, cid 3, qid 0 00:17:47.260 [2024-05-15 01:04:59.474945] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.260 [2024-05-15 01:04:59.474960] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.260 [2024-05-15 01:04:59.474967] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.260 [2024-05-15 01:04:59.474974] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbfe260) on tqpair=0xb9ec80 00:17:47.261 [2024-05-15 01:04:59.474991] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.261 [2024-05-15 01:04:59.475001] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.261 [2024-05-15 01:04:59.475007] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb9ec80) 00:17:47.261 [2024-05-15 01:04:59.475018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.261 [2024-05-15 01:04:59.475039] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfe260, cid 3, qid 0 00:17:47.261 [2024-05-15 01:04:59.475198] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.261 [2024-05-15 01:04:59.475214] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.261 [2024-05-15 01:04:59.475221] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.261 [2024-05-15 01:04:59.475228] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbfe260) on tqpair=0xb9ec80 00:17:47.261 [2024-05-15 01:04:59.475245] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.261 [2024-05-15 01:04:59.475254] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.261 [2024-05-15 01:04:59.475261] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb9ec80) 00:17:47.261 [2024-05-15 01:04:59.475271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.261 [2024-05-15 01:04:59.475292] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfe260, cid 3, qid 0 00:17:47.261 [2024-05-15 01:04:59.475453] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.261 [2024-05-15 
01:04:59.475465] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.261 [2024-05-15 01:04:59.475472] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.261 [2024-05-15 01:04:59.475479] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbfe260) on tqpair=0xb9ec80 00:17:47.261 [2024-05-15 01:04:59.475495] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.261 [2024-05-15 01:04:59.475505] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.261 [2024-05-15 01:04:59.475511] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb9ec80) 00:17:47.261 [2024-05-15 01:04:59.475522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.261 [2024-05-15 01:04:59.475542] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfe260, cid 3, qid 0 00:17:47.261 [2024-05-15 01:04:59.475695] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.261 [2024-05-15 01:04:59.475707] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.261 [2024-05-15 01:04:59.475714] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.261 [2024-05-15 01:04:59.475721] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbfe260) on tqpair=0xb9ec80 00:17:47.261 [2024-05-15 01:04:59.475737] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.261 [2024-05-15 01:04:59.475746] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.261 [2024-05-15 01:04:59.475753] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb9ec80) 00:17:47.261 [2024-05-15 01:04:59.475763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.261 [2024-05-15 01:04:59.475783] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfe260, cid 3, qid 0 00:17:47.261 [2024-05-15 01:04:59.475942] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.261 [2024-05-15 01:04:59.475960] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.261 [2024-05-15 01:04:59.475968] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.261 [2024-05-15 01:04:59.475975] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbfe260) on tqpair=0xb9ec80 00:17:47.261 [2024-05-15 01:04:59.475991] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.261 [2024-05-15 01:04:59.476001] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.261 [2024-05-15 01:04:59.476007] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb9ec80) 00:17:47.261 [2024-05-15 01:04:59.476017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.261 [2024-05-15 01:04:59.476038] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfe260, cid 3, qid 0 00:17:47.261 [2024-05-15 01:04:59.476194] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.261 [2024-05-15 01:04:59.476207] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.261 [2024-05-15 01:04:59.476214] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.261 
[2024-05-15 01:04:59.476220] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbfe260) on tqpair=0xb9ec80 00:17:47.261 [2024-05-15 01:04:59.476237] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.261 [2024-05-15 01:04:59.476246] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.261 [2024-05-15 01:04:59.476253] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb9ec80) 00:17:47.261 [2024-05-15 01:04:59.476263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.261 [2024-05-15 01:04:59.476283] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfe260, cid 3, qid 0 00:17:47.261 [2024-05-15 01:04:59.476436] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.261 [2024-05-15 01:04:59.476449] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.261 [2024-05-15 01:04:59.476456] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.261 [2024-05-15 01:04:59.476462] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbfe260) on tqpair=0xb9ec80 00:17:47.261 [2024-05-15 01:04:59.476479] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.261 [2024-05-15 01:04:59.476488] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.261 [2024-05-15 01:04:59.476495] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb9ec80) 00:17:47.261 [2024-05-15 01:04:59.476505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.261 [2024-05-15 01:04:59.476525] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfe260, cid 3, qid 0 00:17:47.261 [2024-05-15 01:04:59.476677] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.261 [2024-05-15 01:04:59.476690] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.261 [2024-05-15 01:04:59.476697] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.261 [2024-05-15 01:04:59.476704] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbfe260) on tqpair=0xb9ec80 00:17:47.261 [2024-05-15 01:04:59.476720] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.261 [2024-05-15 01:04:59.476729] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.261 [2024-05-15 01:04:59.476735] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb9ec80) 00:17:47.261 [2024-05-15 01:04:59.476746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.261 [2024-05-15 01:04:59.476766] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfe260, cid 3, qid 0 00:17:47.261 [2024-05-15 01:04:59.476919] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.261 [2024-05-15 01:04:59.476941] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.261 [2024-05-15 01:04:59.476954] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.261 [2024-05-15 01:04:59.476961] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbfe260) on tqpair=0xb9ec80 00:17:47.261 [2024-05-15 01:04:59.476978] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.261 [2024-05-15 01:04:59.476988] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.261 [2024-05-15 01:04:59.476995] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb9ec80) 00:17:47.261 [2024-05-15 01:04:59.477005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.261 [2024-05-15 01:04:59.477026] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfe260, cid 3, qid 0 00:17:47.261 [2024-05-15 01:04:59.477186] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.261 [2024-05-15 01:04:59.477201] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.261 [2024-05-15 01:04:59.477209] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.261 [2024-05-15 01:04:59.477215] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbfe260) on tqpair=0xb9ec80 00:17:47.261 [2024-05-15 01:04:59.477232] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.261 [2024-05-15 01:04:59.477241] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.261 [2024-05-15 01:04:59.477248] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb9ec80) 00:17:47.261 [2024-05-15 01:04:59.477258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.261 [2024-05-15 01:04:59.477278] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfe260, cid 3, qid 0 00:17:47.261 [2024-05-15 01:04:59.477431] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.261 [2024-05-15 01:04:59.477446] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.261 [2024-05-15 01:04:59.477453] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.261 [2024-05-15 01:04:59.477460] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbfe260) on tqpair=0xb9ec80 00:17:47.261 [2024-05-15 01:04:59.477477] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.261 [2024-05-15 01:04:59.477487] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.261 [2024-05-15 01:04:59.477493] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb9ec80) 00:17:47.261 [2024-05-15 01:04:59.477504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.261 [2024-05-15 01:04:59.477524] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfe260, cid 3, qid 0 00:17:47.261 [2024-05-15 01:04:59.477675] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.261 [2024-05-15 01:04:59.477688] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.261 [2024-05-15 01:04:59.477695] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.261 [2024-05-15 01:04:59.477702] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbfe260) on tqpair=0xb9ec80 00:17:47.262 [2024-05-15 01:04:59.477718] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.262 [2024-05-15 01:04:59.477727] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.262 [2024-05-15 01:04:59.477733] 
nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb9ec80) 00:17:47.262 [2024-05-15 01:04:59.477744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.262 [2024-05-15 01:04:59.477763] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfe260, cid 3, qid 0 00:17:47.262 [2024-05-15 01:04:59.477916] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.262 [2024-05-15 01:04:59.481938] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.262 [2024-05-15 01:04:59.481952] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.262 [2024-05-15 01:04:59.481963] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbfe260) on tqpair=0xb9ec80 00:17:47.262 [2024-05-15 01:04:59.481982] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.262 [2024-05-15 01:04:59.481991] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.262 [2024-05-15 01:04:59.481997] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb9ec80) 00:17:47.262 [2024-05-15 01:04:59.482008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.262 [2024-05-15 01:04:59.482029] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbfe260, cid 3, qid 0 00:17:47.262 [2024-05-15 01:04:59.482209] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.262 [2024-05-15 01:04:59.482222] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.262 [2024-05-15 01:04:59.482229] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.262 [2024-05-15 01:04:59.482236] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xbfe260) on tqpair=0xb9ec80 00:17:47.262 [2024-05-15 01:04:59.482249] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:17:47.262 00:17:47.262 01:04:59 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:47.262 [2024-05-15 01:04:59.516878] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
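(The host/identify.sh step above now launches SPDK's bundled spdk_nvme_identify example a second time, aimed directly at the data subsystem nqn.2016-06.io.spdk:cnode1 rather than the discovery service. Restated with the same binary and arguments as the logged command, wrapped only for readability:

  $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
      -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -L all

The -r string selects the TCP transport, IPv4 address family, target address 10.0.0.2, service port 4420, and the subsystem NQN; -L all appears to enable all debug log flags, which is consistent with the verbose nvme_tcp/nvme_ctrlr *DEBUG* records that follow.)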
00:17:47.262 [2024-05-15 01:04:59.516948] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1291283 ] 00:17:47.262 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.262 [2024-05-15 01:04:59.551847] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:17:47.262 [2024-05-15 01:04:59.551898] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:47.262 [2024-05-15 01:04:59.551908] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:47.262 [2024-05-15 01:04:59.551942] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:47.262 [2024-05-15 01:04:59.551956] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:47.262 [2024-05-15 01:04:59.552250] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:17:47.262 [2024-05-15 01:04:59.552291] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x13cdc80 0 00:17:47.262 [2024-05-15 01:04:59.566939] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:47.262 [2024-05-15 01:04:59.566960] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:47.262 [2024-05-15 01:04:59.566973] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:47.262 [2024-05-15 01:04:59.566980] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:47.262 [2024-05-15 01:04:59.567022] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.262 [2024-05-15 01:04:59.567035] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.262 [2024-05-15 01:04:59.567042] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13cdc80) 00:17:47.262 [2024-05-15 01:04:59.567056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:47.262 [2024-05-15 01:04:59.567083] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142ce40, cid 0, qid 0 00:17:47.262 [2024-05-15 01:04:59.574945] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.262 [2024-05-15 01:04:59.574964] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.262 [2024-05-15 01:04:59.574972] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.262 [2024-05-15 01:04:59.574994] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142ce40) on tqpair=0x13cdc80 00:17:47.262 [2024-05-15 01:04:59.575011] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:47.262 [2024-05-15 01:04:59.575025] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:17:47.262 [2024-05-15 01:04:59.575035] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:17:47.262 [2024-05-15 01:04:59.575052] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.262 [2024-05-15 01:04:59.575061] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.262 [2024-05-15 
01:04:59.575067] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13cdc80) 00:17:47.262 [2024-05-15 01:04:59.575079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.262 [2024-05-15 01:04:59.575103] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142ce40, cid 0, qid 0 00:17:47.262 [2024-05-15 01:04:59.575301] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.262 [2024-05-15 01:04:59.575318] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.262 [2024-05-15 01:04:59.575325] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.262 [2024-05-15 01:04:59.575332] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142ce40) on tqpair=0x13cdc80 00:17:47.262 [2024-05-15 01:04:59.575341] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:17:47.262 [2024-05-15 01:04:59.575356] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:17:47.262 [2024-05-15 01:04:59.575371] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.262 [2024-05-15 01:04:59.575379] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.262 [2024-05-15 01:04:59.575386] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13cdc80) 00:17:47.262 [2024-05-15 01:04:59.575397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.262 [2024-05-15 01:04:59.575434] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142ce40, cid 0, qid 0 00:17:47.262 [2024-05-15 01:04:59.575670] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.262 [2024-05-15 01:04:59.575686] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.262 [2024-05-15 01:04:59.575693] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.262 [2024-05-15 01:04:59.575700] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142ce40) on tqpair=0x13cdc80 00:17:47.262 [2024-05-15 01:04:59.575710] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:17:47.262 [2024-05-15 01:04:59.575725] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:17:47.262 [2024-05-15 01:04:59.575739] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.262 [2024-05-15 01:04:59.575747] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.262 [2024-05-15 01:04:59.575753] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13cdc80) 00:17:47.262 [2024-05-15 01:04:59.575764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.262 [2024-05-15 01:04:59.575785] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142ce40, cid 0, qid 0 00:17:47.262 [2024-05-15 01:04:59.576061] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.262 [2024-05-15 01:04:59.576082] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:17:47.262 [2024-05-15 01:04:59.576090] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.262 [2024-05-15 01:04:59.576096] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142ce40) on tqpair=0x13cdc80 00:17:47.262 [2024-05-15 01:04:59.576106] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:47.262 [2024-05-15 01:04:59.576126] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.262 [2024-05-15 01:04:59.576137] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.262 [2024-05-15 01:04:59.576143] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13cdc80) 00:17:47.262 [2024-05-15 01:04:59.576154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.262 [2024-05-15 01:04:59.576176] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142ce40, cid 0, qid 0 00:17:47.262 [2024-05-15 01:04:59.576365] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.262 [2024-05-15 01:04:59.576383] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.262 [2024-05-15 01:04:59.576392] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.262 [2024-05-15 01:04:59.576398] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142ce40) on tqpair=0x13cdc80 00:17:47.262 [2024-05-15 01:04:59.576407] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:17:47.262 [2024-05-15 01:04:59.576416] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:17:47.262 [2024-05-15 01:04:59.576430] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:47.262 [2024-05-15 01:04:59.576542] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:17:47.262 [2024-05-15 01:04:59.576550] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:47.262 [2024-05-15 01:04:59.576562] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.262 [2024-05-15 01:04:59.576584] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.262 [2024-05-15 01:04:59.576590] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13cdc80) 00:17:47.262 [2024-05-15 01:04:59.576600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.262 [2024-05-15 01:04:59.576620] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142ce40, cid 0, qid 0 00:17:47.262 [2024-05-15 01:04:59.576854] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.263 [2024-05-15 01:04:59.576870] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.263 [2024-05-15 01:04:59.576877] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.263 [2024-05-15 01:04:59.576884] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142ce40) on 
tqpair=0x13cdc80 00:17:47.263 [2024-05-15 01:04:59.576893] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:47.263 [2024-05-15 01:04:59.576913] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.263 [2024-05-15 01:04:59.576923] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.263 [2024-05-15 01:04:59.576941] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13cdc80) 00:17:47.263 [2024-05-15 01:04:59.576953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.263 [2024-05-15 01:04:59.576975] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142ce40, cid 0, qid 0 00:17:47.263 [2024-05-15 01:04:59.577200] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.263 [2024-05-15 01:04:59.577217] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.263 [2024-05-15 01:04:59.577224] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.263 [2024-05-15 01:04:59.577231] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142ce40) on tqpair=0x13cdc80 00:17:47.263 [2024-05-15 01:04:59.577239] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:47.263 [2024-05-15 01:04:59.577248] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:17:47.263 [2024-05-15 01:04:59.577263] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:17:47.263 [2024-05-15 01:04:59.577283] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:17:47.263 [2024-05-15 01:04:59.577313] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.263 [2024-05-15 01:04:59.577321] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13cdc80) 00:17:47.263 [2024-05-15 01:04:59.577331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.263 [2024-05-15 01:04:59.577352] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142ce40, cid 0, qid 0 00:17:47.263 [2024-05-15 01:04:59.577603] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:47.263 [2024-05-15 01:04:59.577623] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:47.263 [2024-05-15 01:04:59.577635] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:47.263 [2024-05-15 01:04:59.577645] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13cdc80): datao=0, datal=4096, cccid=0 00:17:47.263 [2024-05-15 01:04:59.577656] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x142ce40) on tqpair(0x13cdc80): expected_datao=0, payload_size=4096 00:17:47.263 [2024-05-15 01:04:59.577667] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.263 [2024-05-15 01:04:59.577717] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:47.263 [2024-05-15 01:04:59.577731] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:47.263 [2024-05-15 01:04:59.621944] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.263 [2024-05-15 01:04:59.621963] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.263 [2024-05-15 01:04:59.621971] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.263 [2024-05-15 01:04:59.621978] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142ce40) on tqpair=0x13cdc80 00:17:47.263 [2024-05-15 01:04:59.621991] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:17:47.263 [2024-05-15 01:04:59.622000] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:17:47.263 [2024-05-15 01:04:59.622007] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:17:47.263 [2024-05-15 01:04:59.622014] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:17:47.263 [2024-05-15 01:04:59.622022] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:17:47.263 [2024-05-15 01:04:59.622030] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:17:47.263 [2024-05-15 01:04:59.622051] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:17:47.263 [2024-05-15 01:04:59.622068] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.263 [2024-05-15 01:04:59.622077] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.263 [2024-05-15 01:04:59.622086] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13cdc80) 00:17:47.263 [2024-05-15 01:04:59.622099] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:47.263 [2024-05-15 01:04:59.622122] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142ce40, cid 0, qid 0 00:17:47.263 [2024-05-15 01:04:59.622349] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.263 [2024-05-15 01:04:59.622366] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.263 [2024-05-15 01:04:59.622373] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.263 [2024-05-15 01:04:59.622380] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142ce40) on tqpair=0x13cdc80 00:17:47.263 [2024-05-15 01:04:59.622392] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.263 [2024-05-15 01:04:59.622400] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.263 [2024-05-15 01:04:59.622406] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13cdc80) 00:17:47.263 [2024-05-15 01:04:59.622417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.263 [2024-05-15 01:04:59.622427] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.263 [2024-05-15 01:04:59.622434] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.263 [2024-05-15 01:04:59.622455] 
nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x13cdc80) 00:17:47.263 [2024-05-15 01:04:59.622464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.263 [2024-05-15 01:04:59.622474] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.263 [2024-05-15 01:04:59.622481] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.263 [2024-05-15 01:04:59.622487] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x13cdc80) 00:17:47.263 [2024-05-15 01:04:59.622495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.263 [2024-05-15 01:04:59.622505] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.263 [2024-05-15 01:04:59.622511] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.263 [2024-05-15 01:04:59.622532] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cdc80) 00:17:47.263 [2024-05-15 01:04:59.622540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.263 [2024-05-15 01:04:59.622549] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:47.263 [2024-05-15 01:04:59.622569] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:47.263 [2024-05-15 01:04:59.622582] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.263 [2024-05-15 01:04:59.622589] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13cdc80) 00:17:47.263 [2024-05-15 01:04:59.622599] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.263 [2024-05-15 01:04:59.622621] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142ce40, cid 0, qid 0 00:17:47.263 [2024-05-15 01:04:59.622647] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142cfa0, cid 1, qid 0 00:17:47.263 [2024-05-15 01:04:59.622655] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142d100, cid 2, qid 0 00:17:47.263 [2024-05-15 01:04:59.622662] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142d260, cid 3, qid 0 00:17:47.263 [2024-05-15 01:04:59.622670] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142d3c0, cid 4, qid 0 00:17:47.263 [2024-05-15 01:04:59.622975] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.263 [2024-05-15 01:04:59.622992] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.263 [2024-05-15 01:04:59.622999] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.263 [2024-05-15 01:04:59.623006] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142d3c0) on tqpair=0x13cdc80 00:17:47.263 [2024-05-15 01:04:59.623015] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:17:47.263 [2024-05-15 01:04:59.623024] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to identify controller iocs specific (timeout 30000 ms) 00:17:47.263 [2024-05-15 01:04:59.623039] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:17:47.263 [2024-05-15 01:04:59.623057] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:47.263 [2024-05-15 01:04:59.623069] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.263 [2024-05-15 01:04:59.623077] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.263 [2024-05-15 01:04:59.623083] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13cdc80) 00:17:47.263 [2024-05-15 01:04:59.623094] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:47.263 [2024-05-15 01:04:59.623116] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142d3c0, cid 4, qid 0 00:17:47.263 [2024-05-15 01:04:59.623335] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.263 [2024-05-15 01:04:59.623351] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.263 [2024-05-15 01:04:59.623358] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.263 [2024-05-15 01:04:59.623364] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142d3c0) on tqpair=0x13cdc80 00:17:47.263 [2024-05-15 01:04:59.623423] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:17:47.264 [2024-05-15 01:04:59.623460] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:47.264 [2024-05-15 01:04:59.623476] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.264 [2024-05-15 01:04:59.623483] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13cdc80) 00:17:47.264 [2024-05-15 01:04:59.623494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.264 [2024-05-15 01:04:59.623530] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142d3c0, cid 4, qid 0 00:17:47.264 [2024-05-15 01:04:59.623815] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:47.264 [2024-05-15 01:04:59.623835] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:47.264 [2024-05-15 01:04:59.623847] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:47.264 [2024-05-15 01:04:59.623856] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13cdc80): datao=0, datal=4096, cccid=4 00:17:47.264 [2024-05-15 01:04:59.623867] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x142d3c0) on tqpair(0x13cdc80): expected_datao=0, payload_size=4096 00:17:47.264 [2024-05-15 01:04:59.623878] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.264 [2024-05-15 01:04:59.623936] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:47.264 [2024-05-15 01:04:59.623949] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:47.524 [2024-05-15 01:04:59.664174] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.524 [2024-05-15 01:04:59.664200] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.524 [2024-05-15 01:04:59.664208] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.524 [2024-05-15 01:04:59.664219] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142d3c0) on tqpair=0x13cdc80 00:17:47.524 [2024-05-15 01:04:59.664242] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:17:47.524 [2024-05-15 01:04:59.664262] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:17:47.524 [2024-05-15 01:04:59.664283] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:17:47.524 [2024-05-15 01:04:59.664300] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.524 [2024-05-15 01:04:59.664309] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13cdc80) 00:17:47.524 [2024-05-15 01:04:59.664321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.524 [2024-05-15 01:04:59.664346] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142d3c0, cid 4, qid 0 00:17:47.524 [2024-05-15 01:04:59.664545] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:47.524 [2024-05-15 01:04:59.664566] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:47.524 [2024-05-15 01:04:59.664578] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:47.524 [2024-05-15 01:04:59.664588] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13cdc80): datao=0, datal=4096, cccid=4 00:17:47.524 [2024-05-15 01:04:59.664599] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x142d3c0) on tqpair(0x13cdc80): expected_datao=0, payload_size=4096 00:17:47.524 [2024-05-15 01:04:59.664610] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.524 [2024-05-15 01:04:59.664661] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:47.524 [2024-05-15 01:04:59.664675] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:47.524 [2024-05-15 01:04:59.708959] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.524 [2024-05-15 01:04:59.708979] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.524 [2024-05-15 01:04:59.708987] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.524 [2024-05-15 01:04:59.708994] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142d3c0) on tqpair=0x13cdc80 00:17:47.524 [2024-05-15 01:04:59.709012] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:47.524 [2024-05-15 01:04:59.709032] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:47.524 [2024-05-15 01:04:59.709051] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.524 [2024-05-15 01:04:59.709060] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x13cdc80) 00:17:47.524 [2024-05-15 01:04:59.709071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.524 [2024-05-15 01:04:59.709095] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142d3c0, cid 4, qid 0 00:17:47.524 [2024-05-15 01:04:59.709325] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:47.524 [2024-05-15 01:04:59.709346] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:47.524 [2024-05-15 01:04:59.709358] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:47.524 [2024-05-15 01:04:59.709368] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13cdc80): datao=0, datal=4096, cccid=4 00:17:47.524 [2024-05-15 01:04:59.709379] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x142d3c0) on tqpair(0x13cdc80): expected_datao=0, payload_size=4096 00:17:47.524 [2024-05-15 01:04:59.709391] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.524 [2024-05-15 01:04:59.709440] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:47.524 [2024-05-15 01:04:59.709454] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:47.524 [2024-05-15 01:04:59.750192] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.524 [2024-05-15 01:04:59.750216] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.524 [2024-05-15 01:04:59.750223] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.524 [2024-05-15 01:04:59.750230] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142d3c0) on tqpair=0x13cdc80 00:17:47.524 [2024-05-15 01:04:59.750251] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:47.524 [2024-05-15 01:04:59.750269] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:17:47.524 [2024-05-15 01:04:59.750287] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:17:47.524 [2024-05-15 01:04:59.750298] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:47.524 [2024-05-15 01:04:59.750306] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:17:47.524 [2024-05-15 01:04:59.750315] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:17:47.524 [2024-05-15 01:04:59.750323] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:17:47.524 [2024-05-15 01:04:59.750331] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:17:47.524 [2024-05-15 01:04:59.750358] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.524 [2024-05-15 01:04:59.750367] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13cdc80) 00:17:47.524 [2024-05-15 01:04:59.750379] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.524 [2024-05-15 01:04:59.750390] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.524 [2024-05-15 01:04:59.750398] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.524 [2024-05-15 01:04:59.750404] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13cdc80) 00:17:47.525 [2024-05-15 01:04:59.750414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.525 [2024-05-15 01:04:59.750440] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142d3c0, cid 4, qid 0 00:17:47.525 [2024-05-15 01:04:59.750451] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142d520, cid 5, qid 0 00:17:47.525 [2024-05-15 01:04:59.750657] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.525 [2024-05-15 01:04:59.750674] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.525 [2024-05-15 01:04:59.750681] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.525 [2024-05-15 01:04:59.750688] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142d3c0) on tqpair=0x13cdc80 00:17:47.525 [2024-05-15 01:04:59.750699] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.525 [2024-05-15 01:04:59.750709] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.525 [2024-05-15 01:04:59.750730] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.525 [2024-05-15 01:04:59.750737] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142d520) on tqpair=0x13cdc80 00:17:47.525 [2024-05-15 01:04:59.750755] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.525 [2024-05-15 01:04:59.750766] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13cdc80) 00:17:47.525 [2024-05-15 01:04:59.750776] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.525 [2024-05-15 01:04:59.750801] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142d520, cid 5, qid 0 00:17:47.525 [2024-05-15 01:04:59.751021] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.525 [2024-05-15 01:04:59.751038] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.525 [2024-05-15 01:04:59.751045] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.525 [2024-05-15 01:04:59.751052] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142d520) on tqpair=0x13cdc80 00:17:47.525 [2024-05-15 01:04:59.751072] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.525 [2024-05-15 01:04:59.751083] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13cdc80) 00:17:47.525 [2024-05-15 01:04:59.751094] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.525 [2024-05-15 01:04:59.751115] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142d520, cid 5, qid 0 00:17:47.525 [2024-05-15 01:04:59.751337] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.525 [2024-05-15 01:04:59.751354] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.525 [2024-05-15 01:04:59.751361] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.525 [2024-05-15 01:04:59.751367] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142d520) on tqpair=0x13cdc80 00:17:47.525 [2024-05-15 01:04:59.751387] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.525 [2024-05-15 01:04:59.751399] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13cdc80) 00:17:47.525 [2024-05-15 01:04:59.751410] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.525 [2024-05-15 01:04:59.751445] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142d520, cid 5, qid 0 00:17:47.525 [2024-05-15 01:04:59.751699] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.525 [2024-05-15 01:04:59.751715] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.525 [2024-05-15 01:04:59.751722] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.525 [2024-05-15 01:04:59.751729] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142d520) on tqpair=0x13cdc80 00:17:47.525 [2024-05-15 01:04:59.751752] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.525 [2024-05-15 01:04:59.751763] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13cdc80) 00:17:47.525 [2024-05-15 01:04:59.751774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.525 [2024-05-15 01:04:59.751787] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.525 [2024-05-15 01:04:59.751794] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13cdc80) 00:17:47.525 [2024-05-15 01:04:59.751803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.525 [2024-05-15 01:04:59.751829] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.525 [2024-05-15 01:04:59.751837] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x13cdc80) 00:17:47.525 [2024-05-15 01:04:59.751847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.525 [2024-05-15 01:04:59.751863] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.525 [2024-05-15 01:04:59.751871] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x13cdc80) 00:17:47.525 [2024-05-15 01:04:59.751880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.525 [2024-05-15 01:04:59.751919] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142d520, cid 5, qid 0 00:17:47.525 [2024-05-15 01:04:59.751940] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142d3c0, cid 4, qid 0 00:17:47.525 [2024-05-15 01:04:59.751949] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x142d680, cid 6, qid 0 00:17:47.525 [2024-05-15 01:04:59.751956] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142d7e0, cid 7, qid 0 00:17:47.525 [2024-05-15 01:04:59.752300] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:47.525 [2024-05-15 01:04:59.752320] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:47.525 [2024-05-15 01:04:59.752332] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:47.525 [2024-05-15 01:04:59.752342] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13cdc80): datao=0, datal=8192, cccid=5 00:17:47.525 [2024-05-15 01:04:59.752368] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x142d520) on tqpair(0x13cdc80): expected_datao=0, payload_size=8192 00:17:47.525 [2024-05-15 01:04:59.752379] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.525 [2024-05-15 01:04:59.752562] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:47.525 [2024-05-15 01:04:59.752576] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:47.525 [2024-05-15 01:04:59.752585] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:47.525 [2024-05-15 01:04:59.752594] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:47.525 [2024-05-15 01:04:59.752602] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:47.525 [2024-05-15 01:04:59.752613] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13cdc80): datao=0, datal=512, cccid=4 00:17:47.525 [2024-05-15 01:04:59.752624] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x142d3c0) on tqpair(0x13cdc80): expected_datao=0, payload_size=512 00:17:47.525 [2024-05-15 01:04:59.752635] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.525 [2024-05-15 01:04:59.752650] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:47.525 [2024-05-15 01:04:59.752662] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:47.525 [2024-05-15 01:04:59.752675] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:47.525 [2024-05-15 01:04:59.752688] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:47.525 [2024-05-15 01:04:59.752698] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:47.525 [2024-05-15 01:04:59.752708] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13cdc80): datao=0, datal=512, cccid=6 00:17:47.525 [2024-05-15 01:04:59.752719] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x142d680) on tqpair(0x13cdc80): expected_datao=0, payload_size=512 00:17:47.525 [2024-05-15 01:04:59.752731] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.525 [2024-05-15 01:04:59.752742] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:47.525 [2024-05-15 01:04:59.752749] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:47.525 [2024-05-15 01:04:59.752758] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:47.525 [2024-05-15 01:04:59.752767] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:47.525 [2024-05-15 01:04:59.752773] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:47.525 [2024-05-15 01:04:59.752779] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13cdc80): datao=0, datal=4096, cccid=7 
00:17:47.525 [2024-05-15 01:04:59.752787] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x142d7e0) on tqpair(0x13cdc80): expected_datao=0, payload_size=4096 00:17:47.525 [2024-05-15 01:04:59.752794] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.525 [2024-05-15 01:04:59.752804] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:47.525 [2024-05-15 01:04:59.752811] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:47.525 [2024-05-15 01:04:59.752823] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.525 [2024-05-15 01:04:59.752832] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.525 [2024-05-15 01:04:59.752859] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.525 [2024-05-15 01:04:59.752866] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142d520) on tqpair=0x13cdc80 00:17:47.525 [2024-05-15 01:04:59.752886] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.525 [2024-05-15 01:04:59.752897] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.525 [2024-05-15 01:04:59.752904] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.525 [2024-05-15 01:04:59.752925] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142d3c0) on tqpair=0x13cdc80 00:17:47.525 [2024-05-15 01:04:59.756968] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.525 [2024-05-15 01:04:59.756982] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.525 [2024-05-15 01:04:59.756988] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.525 [2024-05-15 01:04:59.756995] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142d680) on tqpair=0x13cdc80 00:17:47.525 [2024-05-15 01:04:59.757010] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.525 [2024-05-15 01:04:59.757020] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.525 [2024-05-15 01:04:59.757026] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.525 [2024-05-15 01:04:59.757032] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142d7e0) on tqpair=0x13cdc80 00:17:47.525 ===================================================== 00:17:47.525 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:47.525 ===================================================== 00:17:47.525 Controller Capabilities/Features 00:17:47.525 ================================ 00:17:47.525 Vendor ID: 8086 00:17:47.525 Subsystem Vendor ID: 8086 00:17:47.525 Serial Number: SPDK00000000000001 00:17:47.525 Model Number: SPDK bdev Controller 00:17:47.525 Firmware Version: 24.05 00:17:47.525 Recommended Arb Burst: 6 00:17:47.525 IEEE OUI Identifier: e4 d2 5c 00:17:47.525 Multi-path I/O 00:17:47.525 May have multiple subsystem ports: Yes 00:17:47.525 May have multiple controllers: Yes 00:17:47.525 Associated with SR-IOV VF: No 00:17:47.526 Max Data Transfer Size: 131072 00:17:47.526 Max Number of Namespaces: 32 00:17:47.526 Max Number of I/O Queues: 127 00:17:47.526 NVMe Specification Version (VS): 1.3 00:17:47.526 NVMe Specification Version (Identify): 1.3 00:17:47.526 Maximum Queue Entries: 128 00:17:47.526 Contiguous Queues Required: Yes 00:17:47.526 Arbitration Mechanisms Supported 00:17:47.526 Weighted Round Robin: Not Supported 00:17:47.526 Vendor 
Specific: Not Supported 00:17:47.526 Reset Timeout: 15000 ms 00:17:47.526 Doorbell Stride: 4 bytes 00:17:47.526 NVM Subsystem Reset: Not Supported 00:17:47.526 Command Sets Supported 00:17:47.526 NVM Command Set: Supported 00:17:47.526 Boot Partition: Not Supported 00:17:47.526 Memory Page Size Minimum: 4096 bytes 00:17:47.526 Memory Page Size Maximum: 4096 bytes 00:17:47.526 Persistent Memory Region: Not Supported 00:17:47.526 Optional Asynchronous Events Supported 00:17:47.526 Namespace Attribute Notices: Supported 00:17:47.526 Firmware Activation Notices: Not Supported 00:17:47.526 ANA Change Notices: Not Supported 00:17:47.526 PLE Aggregate Log Change Notices: Not Supported 00:17:47.526 LBA Status Info Alert Notices: Not Supported 00:17:47.526 EGE Aggregate Log Change Notices: Not Supported 00:17:47.526 Normal NVM Subsystem Shutdown event: Not Supported 00:17:47.526 Zone Descriptor Change Notices: Not Supported 00:17:47.526 Discovery Log Change Notices: Not Supported 00:17:47.526 Controller Attributes 00:17:47.526 128-bit Host Identifier: Supported 00:17:47.526 Non-Operational Permissive Mode: Not Supported 00:17:47.526 NVM Sets: Not Supported 00:17:47.526 Read Recovery Levels: Not Supported 00:17:47.526 Endurance Groups: Not Supported 00:17:47.526 Predictable Latency Mode: Not Supported 00:17:47.526 Traffic Based Keep ALive: Not Supported 00:17:47.526 Namespace Granularity: Not Supported 00:17:47.526 SQ Associations: Not Supported 00:17:47.526 UUID List: Not Supported 00:17:47.526 Multi-Domain Subsystem: Not Supported 00:17:47.526 Fixed Capacity Management: Not Supported 00:17:47.526 Variable Capacity Management: Not Supported 00:17:47.526 Delete Endurance Group: Not Supported 00:17:47.526 Delete NVM Set: Not Supported 00:17:47.526 Extended LBA Formats Supported: Not Supported 00:17:47.526 Flexible Data Placement Supported: Not Supported 00:17:47.526 00:17:47.526 Controller Memory Buffer Support 00:17:47.526 ================================ 00:17:47.526 Supported: No 00:17:47.526 00:17:47.526 Persistent Memory Region Support 00:17:47.526 ================================ 00:17:47.526 Supported: No 00:17:47.526 00:17:47.526 Admin Command Set Attributes 00:17:47.526 ============================ 00:17:47.526 Security Send/Receive: Not Supported 00:17:47.526 Format NVM: Not Supported 00:17:47.526 Firmware Activate/Download: Not Supported 00:17:47.526 Namespace Management: Not Supported 00:17:47.526 Device Self-Test: Not Supported 00:17:47.526 Directives: Not Supported 00:17:47.526 NVMe-MI: Not Supported 00:17:47.526 Virtualization Management: Not Supported 00:17:47.526 Doorbell Buffer Config: Not Supported 00:17:47.526 Get LBA Status Capability: Not Supported 00:17:47.526 Command & Feature Lockdown Capability: Not Supported 00:17:47.526 Abort Command Limit: 4 00:17:47.526 Async Event Request Limit: 4 00:17:47.526 Number of Firmware Slots: N/A 00:17:47.526 Firmware Slot 1 Read-Only: N/A 00:17:47.526 Firmware Activation Without Reset: N/A 00:17:47.526 Multiple Update Detection Support: N/A 00:17:47.526 Firmware Update Granularity: No Information Provided 00:17:47.526 Per-Namespace SMART Log: No 00:17:47.526 Asymmetric Namespace Access Log Page: Not Supported 00:17:47.526 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:47.526 Command Effects Log Page: Supported 00:17:47.526 Get Log Page Extended Data: Supported 00:17:47.526 Telemetry Log Pages: Not Supported 00:17:47.526 Persistent Event Log Pages: Not Supported 00:17:47.526 Supported Log Pages Log Page: May Support 00:17:47.526 Commands 
Supported & Effects Log Page: Not Supported 00:17:47.526 Feature Identifiers & Effects Log Page:May Support 00:17:47.526 NVMe-MI Commands & Effects Log Page: May Support 00:17:47.526 Data Area 4 for Telemetry Log: Not Supported 00:17:47.526 Error Log Page Entries Supported: 128 00:17:47.526 Keep Alive: Supported 00:17:47.526 Keep Alive Granularity: 10000 ms 00:17:47.526 00:17:47.526 NVM Command Set Attributes 00:17:47.526 ========================== 00:17:47.526 Submission Queue Entry Size 00:17:47.526 Max: 64 00:17:47.526 Min: 64 00:17:47.526 Completion Queue Entry Size 00:17:47.526 Max: 16 00:17:47.526 Min: 16 00:17:47.526 Number of Namespaces: 32 00:17:47.526 Compare Command: Supported 00:17:47.526 Write Uncorrectable Command: Not Supported 00:17:47.526 Dataset Management Command: Supported 00:17:47.526 Write Zeroes Command: Supported 00:17:47.526 Set Features Save Field: Not Supported 00:17:47.526 Reservations: Supported 00:17:47.526 Timestamp: Not Supported 00:17:47.526 Copy: Supported 00:17:47.526 Volatile Write Cache: Present 00:17:47.526 Atomic Write Unit (Normal): 1 00:17:47.526 Atomic Write Unit (PFail): 1 00:17:47.526 Atomic Compare & Write Unit: 1 00:17:47.526 Fused Compare & Write: Supported 00:17:47.526 Scatter-Gather List 00:17:47.526 SGL Command Set: Supported 00:17:47.526 SGL Keyed: Supported 00:17:47.526 SGL Bit Bucket Descriptor: Not Supported 00:17:47.526 SGL Metadata Pointer: Not Supported 00:17:47.526 Oversized SGL: Not Supported 00:17:47.526 SGL Metadata Address: Not Supported 00:17:47.526 SGL Offset: Supported 00:17:47.526 Transport SGL Data Block: Not Supported 00:17:47.526 Replay Protected Memory Block: Not Supported 00:17:47.526 00:17:47.526 Firmware Slot Information 00:17:47.526 ========================= 00:17:47.526 Active slot: 1 00:17:47.526 Slot 1 Firmware Revision: 24.05 00:17:47.526 00:17:47.526 00:17:47.526 Commands Supported and Effects 00:17:47.526 ============================== 00:17:47.526 Admin Commands 00:17:47.526 -------------- 00:17:47.526 Get Log Page (02h): Supported 00:17:47.526 Identify (06h): Supported 00:17:47.526 Abort (08h): Supported 00:17:47.526 Set Features (09h): Supported 00:17:47.526 Get Features (0Ah): Supported 00:17:47.526 Asynchronous Event Request (0Ch): Supported 00:17:47.526 Keep Alive (18h): Supported 00:17:47.526 I/O Commands 00:17:47.526 ------------ 00:17:47.526 Flush (00h): Supported LBA-Change 00:17:47.526 Write (01h): Supported LBA-Change 00:17:47.526 Read (02h): Supported 00:17:47.526 Compare (05h): Supported 00:17:47.526 Write Zeroes (08h): Supported LBA-Change 00:17:47.526 Dataset Management (09h): Supported LBA-Change 00:17:47.526 Copy (19h): Supported LBA-Change 00:17:47.526 Unknown (79h): Supported LBA-Change 00:17:47.526 Unknown (7Ah): Supported 00:17:47.526 00:17:47.526 Error Log 00:17:47.526 ========= 00:17:47.526 00:17:47.526 Arbitration 00:17:47.526 =========== 00:17:47.526 Arbitration Burst: 1 00:17:47.526 00:17:47.526 Power Management 00:17:47.526 ================ 00:17:47.526 Number of Power States: 1 00:17:47.526 Current Power State: Power State #0 00:17:47.526 Power State #0: 00:17:47.526 Max Power: 0.00 W 00:17:47.526 Non-Operational State: Operational 00:17:47.526 Entry Latency: Not Reported 00:17:47.526 Exit Latency: Not Reported 00:17:47.526 Relative Read Throughput: 0 00:17:47.526 Relative Read Latency: 0 00:17:47.526 Relative Write Throughput: 0 00:17:47.526 Relative Write Latency: 0 00:17:47.526 Idle Power: Not Reported 00:17:47.526 Active Power: Not Reported 00:17:47.526 Non-Operational 
Permissive Mode: Not Supported 00:17:47.526 00:17:47.526 Health Information 00:17:47.526 ================== 00:17:47.526 Critical Warnings: 00:17:47.526 Available Spare Space: OK 00:17:47.526 Temperature: OK 00:17:47.526 Device Reliability: OK 00:17:47.526 Read Only: No 00:17:47.526 Volatile Memory Backup: OK 00:17:47.526 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:47.526 Temperature Threshold: [2024-05-15 01:04:59.757151] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.526 [2024-05-15 01:04:59.757163] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x13cdc80) 00:17:47.526 [2024-05-15 01:04:59.757174] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.526 [2024-05-15 01:04:59.757198] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142d7e0, cid 7, qid 0 00:17:47.526 [2024-05-15 01:04:59.757435] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.526 [2024-05-15 01:04:59.757451] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.526 [2024-05-15 01:04:59.757458] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.526 [2024-05-15 01:04:59.757465] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142d7e0) on tqpair=0x13cdc80 00:17:47.526 [2024-05-15 01:04:59.757508] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:17:47.526 [2024-05-15 01:04:59.757546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.526 [2024-05-15 01:04:59.757559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.527 [2024-05-15 01:04:59.757569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.527 [2024-05-15 01:04:59.757578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.527 [2024-05-15 01:04:59.757590] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.527 [2024-05-15 01:04:59.757598] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.527 [2024-05-15 01:04:59.757604] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cdc80) 00:17:47.527 [2024-05-15 01:04:59.757630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.527 [2024-05-15 01:04:59.757652] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142d260, cid 3, qid 0 00:17:47.527 [2024-05-15 01:04:59.757904] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.527 [2024-05-15 01:04:59.757920] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.527 [2024-05-15 01:04:59.757927] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.527 [2024-05-15 01:04:59.757943] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142d260) on tqpair=0x13cdc80 00:17:47.527 [2024-05-15 01:04:59.757960] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.527 [2024-05-15 01:04:59.757969] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.527 [2024-05-15 01:04:59.757975] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cdc80) 00:17:47.527 [2024-05-15 01:04:59.757986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.527 [2024-05-15 01:04:59.758015] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142d260, cid 3, qid 0 00:17:47.527 [2024-05-15 01:04:59.758245] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.527 [2024-05-15 01:04:59.758261] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.527 [2024-05-15 01:04:59.758268] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.527 [2024-05-15 01:04:59.758275] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142d260) on tqpair=0x13cdc80 00:17:47.527 [2024-05-15 01:04:59.758284] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:17:47.527 [2024-05-15 01:04:59.758292] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:17:47.527 [2024-05-15 01:04:59.758310] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.527 [2024-05-15 01:04:59.758321] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.527 [2024-05-15 01:04:59.758343] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cdc80) 00:17:47.527 [2024-05-15 01:04:59.758354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.527 [2024-05-15 01:04:59.758374] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142d260, cid 3, qid 0 00:17:47.527 [2024-05-15 01:04:59.758625] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.527 [2024-05-15 01:04:59.758641] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.527 [2024-05-15 01:04:59.758648] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.527 [2024-05-15 01:04:59.758655] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142d260) on tqpair=0x13cdc80 00:17:47.527 [2024-05-15 01:04:59.758675] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.527 [2024-05-15 01:04:59.758690] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.527 [2024-05-15 01:04:59.758698] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cdc80) 00:17:47.527 [2024-05-15 01:04:59.758724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.527 [2024-05-15 01:04:59.758745] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142d260, cid 3, qid 0 00:17:47.527 [2024-05-15 01:04:59.758994] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.527 [2024-05-15 01:04:59.759011] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.527 [2024-05-15 01:04:59.759018] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.527 [2024-05-15 01:04:59.759024] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142d260) on tqpair=0x13cdc80 00:17:47.527 [2024-05-15 01:04:59.759045] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.527 [2024-05-15 01:04:59.759056] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.527 [2024-05-15 01:04:59.759062] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cdc80) 00:17:47.527 [2024-05-15 01:04:59.759073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.527 [2024-05-15 01:04:59.759095] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142d260, cid 3, qid 0 00:17:47.527 [2024-05-15 01:04:59.759288] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.527 [2024-05-15 01:04:59.759306] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.527 [2024-05-15 01:04:59.759318] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.527 [2024-05-15 01:04:59.759326] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142d260) on tqpair=0x13cdc80 00:17:47.527 [2024-05-15 01:04:59.759345] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.527 [2024-05-15 01:04:59.759357] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.527 [2024-05-15 01:04:59.759364] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cdc80) 00:17:47.527 [2024-05-15 01:04:59.759375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.527 [2024-05-15 01:04:59.759396] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142d260, cid 3, qid 0 00:17:47.527 [2024-05-15 01:04:59.759617] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.527 [2024-05-15 01:04:59.759633] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.527 [2024-05-15 01:04:59.759640] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.527 [2024-05-15 01:04:59.759647] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142d260) on tqpair=0x13cdc80 00:17:47.527 [2024-05-15 01:04:59.759667] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.527 [2024-05-15 01:04:59.759678] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.527 [2024-05-15 01:04:59.759685] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cdc80) 00:17:47.527 [2024-05-15 01:04:59.759695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.527 [2024-05-15 01:04:59.759731] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142d260, cid 3, qid 0 00:17:47.527 [2024-05-15 01:04:59.760001] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.527 [2024-05-15 01:04:59.760018] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.527 [2024-05-15 01:04:59.760024] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.527 [2024-05-15 01:04:59.760031] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142d260) on tqpair=0x13cdc80 00:17:47.527 [2024-05-15 01:04:59.760051] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.527 [2024-05-15 01:04:59.760062] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.527 [2024-05-15 
01:04:59.760068] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cdc80) 00:17:47.527 [2024-05-15 01:04:59.760079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.527 [2024-05-15 01:04:59.760100] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142d260, cid 3, qid 0 00:17:47.527 [2024-05-15 01:04:59.760318] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.527 [2024-05-15 01:04:59.760335] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.527 [2024-05-15 01:04:59.760341] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.527 [2024-05-15 01:04:59.760348] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142d260) on tqpair=0x13cdc80 00:17:47.527 [2024-05-15 01:04:59.760367] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.527 [2024-05-15 01:04:59.760379] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.527 [2024-05-15 01:04:59.760386] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cdc80) 00:17:47.527 [2024-05-15 01:04:59.760396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.527 [2024-05-15 01:04:59.760432] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142d260, cid 3, qid 0 00:17:47.527 [2024-05-15 01:04:59.760675] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.527 [2024-05-15 01:04:59.760691] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.527 [2024-05-15 01:04:59.760698] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.527 [2024-05-15 01:04:59.760709] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142d260) on tqpair=0x13cdc80 00:17:47.527 [2024-05-15 01:04:59.760729] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.527 [2024-05-15 01:04:59.760744] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.527 [2024-05-15 01:04:59.760756] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cdc80) 00:17:47.527 [2024-05-15 01:04:59.760767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.527 [2024-05-15 01:04:59.760788] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142d260, cid 3, qid 0 00:17:47.527 [2024-05-15 01:04:59.764154] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.527 [2024-05-15 01:04:59.764172] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.527 [2024-05-15 01:04:59.764179] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.527 [2024-05-15 01:04:59.764186] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142d260) on tqpair=0x13cdc80 00:17:47.527 [2024-05-15 01:04:59.764207] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:47.527 [2024-05-15 01:04:59.764233] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:47.527 [2024-05-15 01:04:59.764240] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13cdc80) 00:17:47.528 [2024-05-15 01:04:59.764250] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.528 [2024-05-15 01:04:59.764272] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x142d260, cid 3, qid 0 00:17:47.528 [2024-05-15 01:04:59.764486] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:47.528 [2024-05-15 01:04:59.764503] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:47.528 [2024-05-15 01:04:59.764510] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:47.528 [2024-05-15 01:04:59.764516] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x142d260) on tqpair=0x13cdc80 00:17:47.528 [2024-05-15 01:04:59.764532] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:17:47.528 0 Kelvin (-273 Celsius) 00:17:47.528 Available Spare: 0% 00:17:47.528 Available Spare Threshold: 0% 00:17:47.528 Life Percentage Used: 0% 00:17:47.528 Data Units Read: 0 00:17:47.528 Data Units Written: 0 00:17:47.528 Host Read Commands: 0 00:17:47.528 Host Write Commands: 0 00:17:47.528 Controller Busy Time: 0 minutes 00:17:47.528 Power Cycles: 0 00:17:47.528 Power On Hours: 0 hours 00:17:47.528 Unsafe Shutdowns: 0 00:17:47.528 Unrecoverable Media Errors: 0 00:17:47.528 Lifetime Error Log Entries: 0 00:17:47.528 Warning Temperature Time: 0 minutes 00:17:47.528 Critical Temperature Time: 0 minutes 00:17:47.528 00:17:47.528 Number of Queues 00:17:47.528 ================ 00:17:47.528 Number of I/O Submission Queues: 127 00:17:47.528 Number of I/O Completion Queues: 127 00:17:47.528 00:17:47.528 Active Namespaces 00:17:47.528 ================= 00:17:47.528 Namespace ID:1 00:17:47.528 Error Recovery Timeout: Unlimited 00:17:47.528 Command Set Identifier: NVM (00h) 00:17:47.528 Deallocate: Supported 00:17:47.528 Deallocated/Unwritten Error: Not Supported 00:17:47.528 Deallocated Read Value: Unknown 00:17:47.528 Deallocate in Write Zeroes: Not Supported 00:17:47.528 Deallocated Guard Field: 0xFFFF 00:17:47.528 Flush: Supported 00:17:47.528 Reservation: Supported 00:17:47.528 Namespace Sharing Capabilities: Multiple Controllers 00:17:47.528 Size (in LBAs): 131072 (0GiB) 00:17:47.528 Capacity (in LBAs): 131072 (0GiB) 00:17:47.528 Utilization (in LBAs): 131072 (0GiB) 00:17:47.528 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:47.528 EUI64: ABCDEF0123456789 00:17:47.528 UUID: 17133876-65b2-4ab9-a8eb-41f17de4347a 00:17:47.528 Thin Provisioning: Not Supported 00:17:47.528 Per-NS Atomic Units: Yes 00:17:47.528 Atomic Boundary Size (Normal): 0 00:17:47.528 Atomic Boundary Size (PFail): 0 00:17:47.528 Atomic Boundary Offset: 0 00:17:47.528 Maximum Single Source Range Length: 65535 00:17:47.528 Maximum Copy Length: 65535 00:17:47.528 Maximum Source Range Count: 1 00:17:47.528 NGUID/EUI64 Never Reused: No 00:17:47.528 Namespace Write Protected: No 00:17:47.528 Number of LBA Formats: 1 00:17:47.528 Current LBA Format: LBA Format #00 00:17:47.528 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:47.528 00:17:47.528 01:04:59 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:17:47.528 01:04:59 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:47.528 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.528 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:47.528 01:04:59 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.528 01:04:59 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:47.528 01:04:59 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:17:47.528 01:04:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:47.528 01:04:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:17:47.528 01:04:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:47.528 01:04:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:17:47.528 01:04:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:47.528 01:04:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:47.528 rmmod nvme_tcp 00:17:47.528 rmmod nvme_fabrics 00:17:47.528 rmmod nvme_keyring 00:17:47.528 01:04:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:47.528 01:04:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:17:47.528 01:04:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:17:47.528 01:04:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1291120 ']' 00:17:47.528 01:04:59 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1291120 00:17:47.528 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 1291120 ']' 00:17:47.528 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 1291120 00:17:47.528 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:17:47.528 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:47.528 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1291120 00:17:47.528 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:47.528 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:47.528 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1291120' 00:17:47.528 killing process with pid 1291120 00:17:47.528 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 1291120 00:17:47.528 [2024-05-15 01:04:59.864126] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:47.528 01:04:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 1291120 00:17:47.788 01:05:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:47.788 01:05:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:47.788 01:05:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:47.788 01:05:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:47.788 01:05:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:47.788 01:05:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.788 01:05:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:47.788 01:05:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.322 01:05:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:50.322 00:17:50.322 
real 0m6.827s 00:17:50.322 user 0m8.178s 00:17:50.322 sys 0m2.324s 00:17:50.322 01:05:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:50.322 01:05:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:50.322 ************************************ 00:17:50.322 END TEST nvmf_identify 00:17:50.322 ************************************ 00:17:50.322 01:05:02 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:50.322 01:05:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:50.322 01:05:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:50.322 01:05:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:50.322 ************************************ 00:17:50.322 START TEST nvmf_perf 00:17:50.322 ************************************ 00:17:50.322 01:05:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:50.322 * Looking for test storage... 00:17:50.322 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:50.322 01:05:02 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:50.322 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:50.322 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:50.322 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:50.322 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:50.322 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:50.322 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:50.322 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:50.322 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:50.322 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:50.322 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:50.322 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:50.322 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:50.322 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:50.322 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:50.322 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:50.322 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:50.322 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:50.322 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:50.322 01:05:02 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:50.322 01:05:02 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:50.323 01:05:02 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:50.323 01:05:02 nvmf_tcp.nvmf_perf 
-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.323 01:05:02 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.323 01:05:02 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.323 01:05:02 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:50.323 01:05:02 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.323 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:17:50.323 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:50.323 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:50.323 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:50.323 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:50.323 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:50.323 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:50.323 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:50.323 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:50.323 01:05:02 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:50.323 01:05:02 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:50.323 01:05:02 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:50.323 01:05:02 nvmf_tcp.nvmf_perf -- 
host/perf.sh@17 -- # nvmftestinit 00:17:50.323 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:50.323 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:50.323 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:50.323 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:50.323 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:50.323 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.323 01:05:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:50.323 01:05:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.323 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:50.323 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:50.323 01:05:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:17:50.323 01:05:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:52.854 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:52.854 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:17:52.854 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:52.854 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:52.854 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:52.854 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:52.854 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:52.854 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:17:52.854 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:52.854 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:17:52.854 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:17:52.854 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:17:52.854 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:17:52.854 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:17:52.854 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:17:52.854 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:52.854 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:52.854 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:52.854 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:52.854 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:52.854 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:52.854 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:52.854 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:52.854 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:52.854 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:52.854 01:05:04 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:52.854 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:52.854 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:52.854 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:52.854 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:52.854 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:52.854 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:52.854 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:52.854 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:52.854 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:52.854 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:52.854 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:52.855 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:52.855 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:52.855 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:52.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:52.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:17:52.855 00:17:52.855 --- 10.0.0.2 ping statistics --- 00:17:52.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.855 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:52.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:52.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:17:52.855 00:17:52.855 --- 10.0.0.1 ping statistics --- 00:17:52.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.855 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1293625 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1293625 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 1293625 ']' 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:52.855 01:05:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:52.855 [2024-05-15 01:05:04.942425] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:17:52.855 [2024-05-15 01:05:04.942522] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.855 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.855 [2024-05-15 01:05:05.022817] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:52.855 [2024-05-15 01:05:05.139725] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:52.855 [2024-05-15 01:05:05.139795] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:52.855 [2024-05-15 01:05:05.139811] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:52.855 [2024-05-15 01:05:05.139825] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:52.855 [2024-05-15 01:05:05.139836] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:52.855 [2024-05-15 01:05:05.139928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.855 [2024-05-15 01:05:05.140010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:52.855 [2024-05-15 01:05:05.140102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:52.855 [2024-05-15 01:05:05.140105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.823 01:05:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:53.823 01:05:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:17:53.823 01:05:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:53.823 01:05:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:53.823 01:05:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:53.823 01:05:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:53.823 01:05:05 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:17:53.823 01:05:05 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:17:57.104 01:05:08 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:17:57.104 01:05:08 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:57.104 01:05:09 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:17:57.104 01:05:09 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:57.361 01:05:09 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:57.361 01:05:09 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:17:57.361 01:05:09 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:57.361 01:05:09 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:17:57.361 01:05:09 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:57.361 [2024-05-15 01:05:09.717048] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
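At this point the target application is running inside the cvl_0_0_ns_spdk namespace and host/perf.sh configures it over the RPC socket: the TCP transport has just been created, and the calls that follow create subsystem nqn.2016-06.io.spdk:cnode1, attach the Malloc0 and Nvme0n1 bdevs as namespaces, and add a TCP listener on 10.0.0.2:4420. A condensed sketch of that rpc.py sequence as it appears in this log (paths shortened from the /var/jenkins/... prefix shown above; the default /var/tmp/spdk.sock RPC socket is assumed):

    # Minimal sketch of the RPC calls host/perf.sh issues against the running nvmf_tgt
    RPC=scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o                              # TCP transport (created above)
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # malloc bdev as namespace 1
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1     # local NVMe bdev as namespace 2
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420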
00:17:57.361 01:05:09 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:57.619 01:05:09 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:57.619 01:05:09 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:57.876 01:05:10 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:57.876 01:05:10 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:58.135 01:05:10 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:58.392 [2024-05-15 01:05:10.728652] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:58.392 [2024-05-15 01:05:10.728995] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:58.392 01:05:10 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:58.649 01:05:10 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:17:58.649 01:05:10 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:17:58.649 01:05:10 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:58.649 01:05:10 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:18:00.023 Initializing NVMe Controllers 00:18:00.023 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:18:00.023 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:18:00.023 Initialization complete. Launching workers. 00:18:00.023 ======================================================== 00:18:00.023 Latency(us) 00:18:00.023 Device Information : IOPS MiB/s Average min max 00:18:00.023 PCIE (0000:88:00.0) NSID 1 from core 0: 84819.39 331.33 376.71 33.43 8256.20 00:18:00.023 ======================================================== 00:18:00.023 Total : 84819.39 331.33 376.71 33.43 8256.20 00:18:00.023 00:18:00.023 01:05:12 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:00.023 EAL: No free 2048 kB hugepages reported on node 1 00:18:01.396 Initializing NVMe Controllers 00:18:01.396 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:01.396 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:01.396 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:01.396 Initialization complete. Launching workers. 
00:18:01.396 ======================================================== 00:18:01.396 Latency(us) 00:18:01.396 Device Information : IOPS MiB/s Average min max 00:18:01.396 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 55.80 0.22 18489.80 218.93 45708.32 00:18:01.396 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 41.85 0.16 25032.84 6983.34 48878.67 00:18:01.396 ======================================================== 00:18:01.396 Total : 97.65 0.38 21293.96 218.93 48878.67 00:18:01.396 00:18:01.396 01:05:13 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:01.396 EAL: No free 2048 kB hugepages reported on node 1 00:18:02.768 Initializing NVMe Controllers 00:18:02.768 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:02.768 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:02.768 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:02.768 Initialization complete. Launching workers. 00:18:02.768 ======================================================== 00:18:02.768 Latency(us) 00:18:02.768 Device Information : IOPS MiB/s Average min max 00:18:02.768 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7873.14 30.75 4064.47 594.01 8128.36 00:18:02.768 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3831.36 14.97 8378.60 5545.78 16947.65 00:18:02.768 ======================================================== 00:18:02.768 Total : 11704.50 45.72 5476.66 594.01 16947.65 00:18:02.768 00:18:02.768 01:05:15 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:18:02.768 01:05:15 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:18:02.768 01:05:15 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:03.026 EAL: No free 2048 kB hugepages reported on node 1 00:18:05.554 Initializing NVMe Controllers 00:18:05.554 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:05.554 Controller IO queue size 128, less than required. 00:18:05.554 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:05.554 Controller IO queue size 128, less than required. 00:18:05.554 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:05.554 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:05.554 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:05.554 Initialization complete. Launching workers. 
00:18:05.554 ======================================================== 00:18:05.554 Latency(us) 00:18:05.554 Device Information : IOPS MiB/s Average min max 00:18:05.554 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 818.49 204.62 163007.58 80115.49 253545.44 00:18:05.554 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 620.00 155.00 215411.72 106772.68 311381.50 00:18:05.554 ======================================================== 00:18:05.554 Total : 1438.49 359.62 185594.00 80115.49 311381.50 00:18:05.554 00:18:05.554 01:05:17 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:18:05.554 EAL: No free 2048 kB hugepages reported on node 1 00:18:05.813 No valid NVMe controllers or AIO or URING devices found 00:18:05.813 Initializing NVMe Controllers 00:18:05.813 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:05.813 Controller IO queue size 128, less than required. 00:18:05.813 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:05.813 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:18:05.813 Controller IO queue size 128, less than required. 00:18:05.813 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:05.813 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:18:05.813 WARNING: Some requested NVMe devices were skipped 00:18:05.813 01:05:17 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:18:05.813 EAL: No free 2048 kB hugepages reported on node 1 00:18:08.343 Initializing NVMe Controllers 00:18:08.343 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:08.343 Controller IO queue size 128, less than required. 00:18:08.343 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:08.343 Controller IO queue size 128, less than required. 00:18:08.343 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:08.343 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:08.343 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:08.343 Initialization complete. Launching workers. 
00:18:08.343 00:18:08.343 ==================== 00:18:08.343 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:18:08.343 TCP transport: 00:18:08.343 polls: 31376 00:18:08.343 idle_polls: 9770 00:18:08.344 sock_completions: 21606 00:18:08.344 nvme_completions: 3231 00:18:08.344 submitted_requests: 4818 00:18:08.344 queued_requests: 1 00:18:08.344 00:18:08.344 ==================== 00:18:08.344 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:18:08.344 TCP transport: 00:18:08.344 polls: 34018 00:18:08.344 idle_polls: 10910 00:18:08.344 sock_completions: 23108 00:18:08.344 nvme_completions: 3411 00:18:08.344 submitted_requests: 5138 00:18:08.344 queued_requests: 1 00:18:08.344 ======================================================== 00:18:08.344 Latency(us) 00:18:08.344 Device Information : IOPS MiB/s Average min max 00:18:08.344 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 807.50 201.87 163244.26 93425.55 229853.22 00:18:08.344 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 852.49 213.12 155331.31 71062.67 215647.97 00:18:08.344 ======================================================== 00:18:08.344 Total : 1659.99 415.00 159180.53 71062.67 229853.22 00:18:08.344 00:18:08.344 01:05:20 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:18:08.344 01:05:20 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:08.602 01:05:20 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:18:08.602 01:05:20 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:18:08.602 01:05:20 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:18:08.602 01:05:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:08.602 01:05:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:18:08.602 01:05:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:08.602 01:05:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:18:08.602 01:05:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:08.602 01:05:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:08.602 rmmod nvme_tcp 00:18:08.602 rmmod nvme_fabrics 00:18:08.602 rmmod nvme_keyring 00:18:08.602 01:05:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:08.602 01:05:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:18:08.602 01:05:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:18:08.602 01:05:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1293625 ']' 00:18:08.602 01:05:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1293625 00:18:08.602 01:05:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 1293625 ']' 00:18:08.602 01:05:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 1293625 00:18:08.602 01:05:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:18:08.602 01:05:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:08.602 01:05:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1293625 00:18:08.602 01:05:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:08.602 01:05:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:08.602 01:05:20 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1293625' 00:18:08.602 killing process with pid 1293625 00:18:08.602 01:05:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 1293625 00:18:08.602 [2024-05-15 01:05:20.829767] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:08.602 01:05:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 1293625 00:18:10.505 01:05:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:10.505 01:05:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:10.505 01:05:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:10.505 01:05:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:10.505 01:05:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:10.505 01:05:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.505 01:05:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:10.505 01:05:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:12.449 01:05:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:12.449 00:18:12.449 real 0m22.208s 00:18:12.449 user 1m6.876s 00:18:12.449 sys 0m5.367s 00:18:12.449 01:05:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:12.449 01:05:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:12.449 ************************************ 00:18:12.449 END TEST nvmf_perf 00:18:12.449 ************************************ 00:18:12.449 01:05:24 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:12.449 01:05:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:12.449 01:05:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:12.449 01:05:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:12.449 ************************************ 00:18:12.449 START TEST nvmf_fio_host 00:18:12.449 ************************************ 00:18:12.449 01:05:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:12.449 * Looking for test storage... 
00:18:12.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:18:12.449 01:05:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:12.449 01:05:24 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:12.449 01:05:24 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:12.449 01:05:24 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:12.449 01:05:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.449 01:05:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.449 01:05:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.449 01:05:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:12.449 01:05:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.449 01:05:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:12.449 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:18:12.449 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:12.449 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:12.449 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:18:12.449 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:12.449 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:12.449 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:18:12.450 01:05:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.981 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
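The block below repeats the NIC discovery already shown at the start of the perf test: gather_supported_nvmf_pci_devs walks the supported vendor/device IDs (0x8086:0x159b is the Intel E810 port bound to the ice driver on this node) and resolves each PCI function to its kernel net device through sysfs, keeping only interfaces that are up. A stand-alone sketch of that mapping, with the BDFs and device names taken from the log; reading operstate here is only an approximation of whatever "up" check the script actually performs:

    # Sketch: map the two E810 PCI functions to their net devices via sysfs
    for bdf in 0000:0a:00.0 0000:0a:00.1; do
        for dev in /sys/bus/pci/devices/"$bdf"/net/*; do
            name=${dev##*/}                            # e.g. cvl_0_0, cvl_0_1
            state=$(cat "$dev/operstate" 2>/dev/null)  # approximate link-state check
            echo "Found net devices under $bdf: $name ($state)"
        done
    done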
00:18:14.981 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:14.982 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:14.982 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:14.982 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:14.982 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp 
]] 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:14.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:14.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:18:14.982 00:18:14.982 --- 10.0.0.2 ping statistics --- 00:18:14.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.982 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:14.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:14.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:18:14.982 00:18:14.982 --- 10.0.0.1 ping statistics --- 00:18:14.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.982 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=1298008 00:18:14.982 01:05:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:14.983 01:05:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:14.983 01:05:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 1298008 00:18:14.983 01:05:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 1298008 ']' 00:18:14.983 01:05:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.983 01:05:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:14.983 01:05:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.983 01:05:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:14.983 01:05:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.983 [2024-05-15 01:05:27.307472] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:18:14.983 [2024-05-15 01:05:27.307557] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:14.983 EAL: No free 2048 kB hugepages reported on node 1 00:18:15.241 [2024-05-15 01:05:27.394254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:15.241 [2024-05-15 01:05:27.510970] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:15.241 [2024-05-15 01:05:27.511042] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.241 [2024-05-15 01:05:27.511056] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.241 [2024-05-15 01:05:27.511069] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.241 [2024-05-15 01:05:27.511080] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:15.241 [2024-05-15 01:05:27.511142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.241 [2024-05-15 01:05:27.511191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:15.241 [2024-05-15 01:05:27.511239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:15.241 [2024-05-15 01:05:27.511242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.176 [2024-05-15 01:05:28.310030] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.176 Malloc1 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 
-- # set +x 00:18:16.176 [2024-05-15 01:05:28.391439] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:16.176 [2024-05-15 01:05:28.391753] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:18:16.176 
01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:18:16.176 01:05:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:16.434 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:16.434 fio-3.35 00:18:16.434 Starting 1 thread 00:18:16.434 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.971 00:18:18.971 test: (groupid=0, jobs=1): err= 0: pid=1298237: Wed May 15 01:05:30 2024 00:18:18.971 read: IOPS=9135, BW=35.7MiB/s (37.4MB/s)(71.6MiB/2006msec) 00:18:18.971 slat (nsec): min=1860, max=133629, avg=2462.71, stdev=1629.12 00:18:18.971 clat (usec): min=3454, max=13794, avg=7752.93, stdev=570.33 00:18:18.971 lat (usec): min=3480, max=13796, avg=7755.39, stdev=570.24 00:18:18.971 clat percentiles (usec): 00:18:18.971 | 1.00th=[ 6456], 5.00th=[ 6849], 10.00th=[ 7111], 20.00th=[ 7308], 00:18:18.971 | 30.00th=[ 7504], 40.00th=[ 7635], 50.00th=[ 7767], 60.00th=[ 7898], 00:18:18.971 | 70.00th=[ 8029], 80.00th=[ 8160], 90.00th=[ 8455], 95.00th=[ 8586], 00:18:18.971 | 99.00th=[ 8979], 99.50th=[ 9241], 99.90th=[11863], 99.95th=[12780], 00:18:18.971 | 99.99th=[13829] 00:18:18.971 bw ( KiB/s): min=35728, max=37176, per=99.89%, avg=36504.00, stdev=602.75, samples=4 00:18:18.971 iops : min= 8932, max= 9294, avg=9126.00, stdev=150.69, samples=4 00:18:18.971 write: IOPS=9147, BW=35.7MiB/s (37.5MB/s)(71.7MiB/2006msec); 0 zone resets 00:18:18.971 slat (usec): min=2, max=117, avg= 2.62, stdev= 1.38 00:18:18.971 clat (usec): min=1270, max=10787, avg=6205.96, stdev=497.58 00:18:18.971 lat (usec): min=1277, max=10790, avg=6208.57, stdev=497.57 00:18:18.971 clat percentiles (usec): 00:18:18.972 | 1.00th=[ 5014], 5.00th=[ 5473], 10.00th=[ 5604], 20.00th=[ 5800], 00:18:18.972 | 30.00th=[ 5997], 40.00th=[ 6128], 50.00th=[ 6194], 60.00th=[ 6325], 00:18:18.972 | 70.00th=[ 6456], 80.00th=[ 6587], 90.00th=[ 6783], 95.00th=[ 6915], 00:18:18.972 | 99.00th=[ 7308], 99.50th=[ 7439], 99.90th=[ 9634], 99.95th=[10290], 00:18:18.972 | 99.99th=[10683] 00:18:18.972 bw ( KiB/s): min=36488, max=36832, per=100.00%, avg=36594.00, stdev=160.98, samples=4 00:18:18.972 iops : min= 9122, max= 9208, avg=9148.50, stdev=40.25, samples=4 00:18:18.972 lat (msec) : 2=0.01%, 4=0.08%, 10=99.77%, 20=0.15% 00:18:18.972 cpu : usr=53.52%, sys=37.21%, ctx=61, majf=0, minf=5 00:18:18.972 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:18.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:18.972 issued rwts: total=18326,18350,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.972 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:18.972 00:18:18.972 Run status group 0 (all jobs): 00:18:18.972 READ: bw=35.7MiB/s (37.4MB/s), 35.7MiB/s-35.7MiB/s (37.4MB/s-37.4MB/s), io=71.6MiB (75.1MB), run=2006-2006msec 00:18:18.972 WRITE: bw=35.7MiB/s (37.5MB/s), 35.7MiB/s-35.7MiB/s (37.5MB/s-37.5MB/s), io=71.7MiB (75.2MB), run=2006-2006msec 00:18:18.972 01:05:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:18.972 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:18.972 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:18:18.972 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:18.972 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:18:18.972 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:18.972 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:18:18.972 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:18:18.972 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:18:18.972 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:18.972 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:18:18.972 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:18:18.972 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:18:18.972 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:18:18.972 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:18:18.972 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:18.972 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:18:18.972 01:05:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:18:18.972 01:05:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:18:18.972 01:05:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:18:18.972 01:05:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:18:18.972 01:05:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:18.972 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:18:18.972 fio-3.35 00:18:18.972 Starting 1 thread 00:18:18.972 EAL: No free 2048 kB hugepages reported on node 1 00:18:21.503 00:18:21.503 test: (groupid=0, jobs=1): err= 0: pid=1298573: Wed May 15 01:05:33 2024 00:18:21.503 read: IOPS=7850, BW=123MiB/s (129MB/s)(246MiB/2007msec) 00:18:21.503 slat (nsec): min=2803, max=91760, avg=3486.45, stdev=1468.86 00:18:21.503 clat (usec): min=3528, max=54589, avg=10048.98, stdev=4438.14 00:18:21.503 lat (usec): min=3531, max=54593, avg=10052.46, 
stdev=4438.21 00:18:21.503 clat percentiles (usec): 00:18:21.503 | 1.00th=[ 4686], 5.00th=[ 5800], 10.00th=[ 6456], 20.00th=[ 7373], 00:18:21.503 | 30.00th=[ 8225], 40.00th=[ 8848], 50.00th=[ 9634], 60.00th=[10421], 00:18:21.503 | 70.00th=[11207], 80.00th=[11994], 90.00th=[13173], 95.00th=[14353], 00:18:21.503 | 99.00th=[16909], 99.50th=[50070], 99.90th=[53740], 99.95th=[53740], 00:18:21.503 | 99.99th=[54264] 00:18:21.503 bw ( KiB/s): min=53888, max=68928, per=50.16%, avg=63008.00, stdev=6752.23, samples=4 00:18:21.503 iops : min= 3368, max= 4308, avg=3938.00, stdev=422.01, samples=4 00:18:21.503 write: IOPS=4470, BW=69.8MiB/s (73.2MB/s)(128MiB/1834msec); 0 zone resets 00:18:21.503 slat (usec): min=30, max=169, avg=33.24, stdev= 5.04 00:18:21.503 clat (usec): min=4514, max=20236, avg=11062.24, stdev=1956.42 00:18:21.503 lat (usec): min=4547, max=20267, avg=11095.48, stdev=1957.05 00:18:21.503 clat percentiles (usec): 00:18:21.503 | 1.00th=[ 7439], 5.00th=[ 8291], 10.00th=[ 8717], 20.00th=[ 9241], 00:18:21.503 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[10814], 60.00th=[11469], 00:18:21.503 | 70.00th=[12125], 80.00th=[12911], 90.00th=[13829], 95.00th=[14484], 00:18:21.503 | 99.00th=[15533], 99.50th=[15795], 99.90th=[19006], 99.95th=[19792], 00:18:21.503 | 99.99th=[20317] 00:18:21.503 bw ( KiB/s): min=55936, max=71776, per=91.38%, avg=65352.00, stdev=7194.82, samples=4 00:18:21.503 iops : min= 3496, max= 4486, avg=4084.50, stdev=449.68, samples=4 00:18:21.503 lat (msec) : 4=0.12%, 10=47.47%, 20=51.87%, 50=0.19%, 100=0.35% 00:18:21.503 cpu : usr=74.28%, sys=21.78%, ctx=27, majf=0, minf=1 00:18:21.503 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:21.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.503 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:21.503 issued rwts: total=15756,8198,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.503 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:21.503 00:18:21.503 Run status group 0 (all jobs): 00:18:21.503 READ: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=246MiB (258MB), run=2007-2007msec 00:18:21.503 WRITE: bw=69.8MiB/s (73.2MB/s), 69.8MiB/s-69.8MiB/s (73.2MB/s-73.2MB/s), io=128MiB (134MB), run=1834-1834msec 00:18:21.503 01:05:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:21.503 01:05:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.503 01:05:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.503 01:05:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.503 01:05:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:18:21.503 01:05:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:18:21.503 01:05:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:18:21.503 01:05:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:18:21.503 01:05:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:21.503 01:05:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:18:21.503 01:05:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:21.503 01:05:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:18:21.503 01:05:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:21.503 01:05:33 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:21.503 rmmod nvme_tcp 00:18:21.503 rmmod nvme_fabrics 00:18:21.503 rmmod nvme_keyring 00:18:21.503 01:05:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:21.503 01:05:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:18:21.503 01:05:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:18:21.503 01:05:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1298008 ']' 00:18:21.503 01:05:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1298008 00:18:21.503 01:05:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 1298008 ']' 00:18:21.503 01:05:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 1298008 00:18:21.503 01:05:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:18:21.503 01:05:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:21.503 01:05:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1298008 00:18:21.503 01:05:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:21.503 01:05:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:21.503 01:05:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1298008' 00:18:21.503 killing process with pid 1298008 00:18:21.503 01:05:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 1298008 00:18:21.503 [2024-05-15 01:05:33.607082] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:21.503 01:05:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 1298008 00:18:21.761 01:05:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:21.761 01:05:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:21.761 01:05:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:21.762 01:05:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:21.762 01:05:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:21.762 01:05:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:21.762 01:05:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:21.762 01:05:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:23.664 01:05:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:23.664 00:18:23.664 real 0m11.411s 00:18:23.664 user 0m29.243s 00:18:23.664 sys 0m4.052s 00:18:23.664 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:23.664 01:05:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.664 ************************************ 00:18:23.664 END TEST nvmf_fio_host 00:18:23.664 ************************************ 00:18:23.664 01:05:35 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:23.664 01:05:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:23.664 01:05:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:18:23.664 01:05:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:23.664 ************************************ 00:18:23.664 START TEST nvmf_failover 00:18:23.664 ************************************ 00:18:23.664 01:05:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:23.664 * Looking for test storage... 00:18:23.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:18:23.664 01:05:36 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:23.664 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:18:23.664 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:23.664 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:23.664 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:23.664 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:23.664 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:23.664 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:23.664 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:23.664 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:23.664 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:23.664 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:18:23.923 01:05:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:26.453 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:26.453 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:26.453 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:26.454 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:26.454 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:26.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:26.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:18:26.454 00:18:26.454 --- 10.0.0.2 ping statistics --- 00:18:26.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.454 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:26.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:26.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:18:26.454 00:18:26.454 --- 10.0.0.1 ping statistics --- 00:18:26.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.454 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1301177 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1301177 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 1301177 ']' 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:26.454 01:05:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:26.454 [2024-05-15 01:05:38.726681] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:18:26.454 [2024-05-15 01:05:38.726761] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:26.454 EAL: No free 2048 kB hugepages reported on node 1 00:18:26.454 [2024-05-15 01:05:38.802333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:26.712 [2024-05-15 01:05:38.913368] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:26.712 [2024-05-15 01:05:38.913425] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:26.712 [2024-05-15 01:05:38.913438] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:26.712 [2024-05-15 01:05:38.913449] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:26.712 [2024-05-15 01:05:38.913458] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:26.712 [2024-05-15 01:05:38.913548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:26.712 [2024-05-15 01:05:38.913611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:26.712 [2024-05-15 01:05:38.913614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.646 01:05:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:27.646 01:05:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:18:27.646 01:05:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:27.646 01:05:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:27.646 01:05:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:27.646 01:05:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:27.646 01:05:39 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:27.646 [2024-05-15 01:05:39.907163] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:27.646 01:05:39 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:27.904 Malloc0 00:18:27.904 01:05:40 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:28.162 01:05:40 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:28.419 01:05:40 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:28.676 [2024-05-15 01:05:40.958766] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:28.676 [2024-05-15 01:05:40.959098] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.677 01:05:40 nvmf_tcp.nvmf_failover 
-- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:28.936 [2024-05-15 01:05:41.203775] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:28.936 01:05:41 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:29.236 [2024-05-15 01:05:41.448572] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:18:29.236 01:05:41 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1301471 00:18:29.236 01:05:41 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:18:29.236 01:05:41 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:29.236 01:05:41 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1301471 /var/tmp/bdevperf.sock 00:18:29.236 01:05:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 1301471 ']' 00:18:29.236 01:05:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:29.236 01:05:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:29.236 01:05:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:29.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:29.236 01:05:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:29.236 01:05:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:29.495 01:05:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:29.495 01:05:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:18:29.495 01:05:41 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:30.060 NVMe0n1 00:18:30.060 01:05:42 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:30.626 00:18:30.626 01:05:42 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1301692 00:18:30.626 01:05:42 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:30.626 01:05:42 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:18:31.566 01:05:43 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:31.824 [2024-05-15 01:05:44.033962] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e5bf0 is same with the state(5) to be set 00:18:31.824 [2024-05-15 01:05:44.034046] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e5bf0 is same with the state(5) to be set 00:18:31.824 [2024-05-15 01:05:44.034076] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e5bf0 is same with the state(5) to be set 00:18:31.824 [2024-05-15 01:05:44.034098] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e5bf0 is same with the state(5) to be set 00:18:31.824 [2024-05-15 01:05:44.034118] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e5bf0 is same with the state(5) to be set 00:18:31.824 [2024-05-15 01:05:44.034138] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e5bf0 is same with the state(5) to be set 00:18:31.824 [2024-05-15 01:05:44.034158] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e5bf0 is same with the state(5) to be set 00:18:31.824 [2024-05-15 01:05:44.034180] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e5bf0 is same with the state(5) to be set 00:18:31.824 01:05:44 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:18:35.105 01:05:47 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:35.105 00:18:35.105 01:05:47 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:35.364 [2024-05-15 01:05:47.703432] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703475] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703493] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703505] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703516] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703528] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703539] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703551] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703563] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703575] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703587] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703599] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703611] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703623] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703635] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703646] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703658] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703669] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703679] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703691] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703702] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703713] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703724] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703735] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703746] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703757] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703768] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703779] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703793] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703805] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703816] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703828] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703838] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703850] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703861] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703873] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703884] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703896] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703908] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703920] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703955] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703970] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703983] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.703994] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the 
state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.704006] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.704018] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.704029] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.704040] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.704052] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.704063] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.704075] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.704087] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.704098] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.704110] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.704121] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.704139] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.704152] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.704163] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.704174] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.704186] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.704197] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.704209] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.704220] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.704231] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.704259] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.704269] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.704280] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.704292] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 [2024-05-15 01:05:47.704303] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6420 is same with the state(5) to be set 00:18:35.364 01:05:47 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:18:38.646 01:05:50 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:38.646 [2024-05-15 01:05:50.962621] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:38.646 01:05:50 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:18:40.018 01:05:51 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:40.018 [2024-05-15 01:05:52.269174] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.018 [2024-05-15 01:05:52.269246] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.018 [2024-05-15 01:05:52.269260] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.018 [2024-05-15 01:05:52.269273] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.018 [2024-05-15 01:05:52.269285] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.018 [2024-05-15 01:05:52.269297] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.018 [2024-05-15 01:05:52.269309] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.018 [2024-05-15 01:05:52.269321] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269333] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269356] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269369] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269382] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269393] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269405] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269417] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269428] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269440] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269452] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269465] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269477] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269488] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269501] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269513] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269525] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269538] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269551] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269563] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269576] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269587] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269599] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269611] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269622] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269635] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269649] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269664] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269676] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269692] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269704] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269719] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269731] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269759] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269771] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269784] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269796] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269809] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269822] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269834] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269845] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269856] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269868] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269879] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269905] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269917] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269937] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269964] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269976] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269988] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the 
state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.269999] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.270010] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.270022] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 [2024-05-15 01:05:52.270034] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168bef0 is same with the state(5) to be set 00:18:40.019 01:05:52 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1301692 00:18:46.582 0 00:18:46.582 01:05:57 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1301471 00:18:46.582 01:05:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 1301471 ']' 00:18:46.582 01:05:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 1301471 00:18:46.582 01:05:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:18:46.582 01:05:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:46.582 01:05:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1301471 00:18:46.582 01:05:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:46.582 01:05:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:46.582 01:05:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1301471' 00:18:46.582 killing process with pid 1301471 00:18:46.582 01:05:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 1301471 00:18:46.582 01:05:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 1301471 00:18:46.582 01:05:58 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:18:46.582 [2024-05-15 01:05:41.510612] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:18:46.582 [2024-05-15 01:05:41.510706] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1301471 ] 00:18:46.582 EAL: No free 2048 kB hugepages reported on node 1 00:18:46.582 [2024-05-15 01:05:41.581808] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.582 [2024-05-15 01:05:41.694103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.582 Running I/O for 15 seconds... 
00:18:46.582 [2024-05-15 01:05:44.036548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:73912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.582 [2024-05-15 01:05:44.036590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.582 [2024-05-15 01:05:44.036622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.582 [2024-05-15 01:05:44.036638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.582 [2024-05-15 01:05:44.036653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:74104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.582 [2024-05-15 01:05:44.036668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.582 [2024-05-15 01:05:44.036684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.582 [2024-05-15 01:05:44.036697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.582 [2024-05-15 01:05:44.036712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.582 [2024-05-15 01:05:44.036727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.582 [2024-05-15 01:05:44.036743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:74128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.582 [2024-05-15 01:05:44.036757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.582 [2024-05-15 01:05:44.036772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:74136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.582 [2024-05-15 01:05:44.036785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.582 [2024-05-15 01:05:44.036800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:74144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.582 [2024-05-15 01:05:44.036813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.582 [2024-05-15 01:05:44.036828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:74152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.582 [2024-05-15 01:05:44.036841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.582 [2024-05-15 01:05:44.036855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:74160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.582 [2024-05-15 01:05:44.036869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.582 [2024-05-15 01:05:44.036883] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:74168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.582 [2024-05-15 01:05:44.036896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.582 [2024-05-15 01:05:44.036948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.582 [2024-05-15 01:05:44.036965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.582 [2024-05-15 01:05:44.036980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:74184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.582 [2024-05-15 01:05:44.036994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.582 [2024-05-15 01:05:44.037009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:74192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.582 [2024-05-15 01:05:44.037022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.582 [2024-05-15 01:05:44.037037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:74200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.582 [2024-05-15 01:05:44.037051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.582 [2024-05-15 01:05:44.037066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:74208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.582 [2024-05-15 01:05:44.037080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.582 [2024-05-15 01:05:44.037095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:74216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.582 [2024-05-15 01:05:44.037108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.582 [2024-05-15 01:05:44.037123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:74224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.582 [2024-05-15 01:05:44.037137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.582 [2024-05-15 01:05:44.037151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:74232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.582 [2024-05-15 01:05:44.037165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.582 [2024-05-15 01:05:44.037180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.582 [2024-05-15 01:05:44.037194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.582 [2024-05-15 01:05:44.037213] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:74248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.582 [2024-05-15 01:05:44.037226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.582 [2024-05-15 01:05:44.037241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:74256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.582 [2024-05-15 01:05:44.037261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.582 [2024-05-15 01:05:44.037277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.582 [2024-05-15 01:05:44.037292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.582 [2024-05-15 01:05:44.037308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.582 [2024-05-15 01:05:44.037326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.582 [2024-05-15 01:05:44.037342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.582 [2024-05-15 01:05:44.037356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.582 [2024-05-15 01:05:44.037371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.582 [2024-05-15 01:05:44.037385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.582 [2024-05-15 01:05:44.037399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.582 [2024-05-15 01:05:44.037413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.582 [2024-05-15 01:05:44.037428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.582 [2024-05-15 01:05:44.037442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.582 [2024-05-15 01:05:44.037457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.582 [2024-05-15 01:05:44.037470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.582 [2024-05-15 01:05:44.037485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.582 [2024-05-15 01:05:44.037499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.582 [2024-05-15 01:05:44.037513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:74328 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.583 [2024-05-15 01:05:44.037527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.037542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.583 [2024-05-15 01:05:44.037556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.037571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.583 [2024-05-15 01:05:44.037584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.037599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.583 [2024-05-15 01:05:44.037613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.037628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.583 [2024-05-15 01:05:44.037642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.037657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.583 [2024-05-15 01:05:44.037670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.037689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.583 [2024-05-15 01:05:44.037703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.037718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.583 [2024-05-15 01:05:44.037732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.037747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.583 [2024-05-15 01:05:44.037762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.037776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.583 [2024-05-15 01:05:44.037790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.037805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.583 
[2024-05-15 01:05:44.037819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.037834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.583 [2024-05-15 01:05:44.037848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.037862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.583 [2024-05-15 01:05:44.037876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.037891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.583 [2024-05-15 01:05:44.037905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.037921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.583 [2024-05-15 01:05:44.037958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.037976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.583 [2024-05-15 01:05:44.037990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.038006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.583 [2024-05-15 01:05:44.038020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.038035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.583 [2024-05-15 01:05:44.038049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.038065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:73920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.583 [2024-05-15 01:05:44.038079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.038099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:73928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.583 [2024-05-15 01:05:44.038113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.038129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:73936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.583 [2024-05-15 01:05:44.038143] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.038158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:73944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.583 [2024-05-15 01:05:44.038172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.038186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:73952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.583 [2024-05-15 01:05:44.038200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.038215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:73960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.583 [2024-05-15 01:05:44.038229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.038258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:73968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.583 [2024-05-15 01:05:44.038273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.038287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.583 [2024-05-15 01:05:44.038301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.038315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.583 [2024-05-15 01:05:44.038329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.038344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.583 [2024-05-15 01:05:44.038357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.038371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.583 [2024-05-15 01:05:44.038384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.038399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.583 [2024-05-15 01:05:44.038413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.038427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.583 [2024-05-15 01:05:44.038441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.038455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.583 [2024-05-15 01:05:44.038472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.038488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.583 [2024-05-15 01:05:44.038502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.038516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.583 [2024-05-15 01:05:44.038530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.038544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.583 [2024-05-15 01:05:44.038558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.038572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.583 [2024-05-15 01:05:44.038586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.038600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.583 [2024-05-15 01:05:44.038613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.038628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.583 [2024-05-15 01:05:44.038641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.038655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.583 [2024-05-15 01:05:44.038669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.038683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.583 [2024-05-15 01:05:44.038697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.038711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.583 [2024-05-15 01:05:44.038725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:46.583 [2024-05-15 01:05:44.038739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.584 [2024-05-15 01:05:44.038754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.584 [2024-05-15 01:05:44.038769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.584 [2024-05-15 01:05:44.038783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.584 [2024-05-15 01:05:44.038797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.584 [2024-05-15 01:05:44.038810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.584 [2024-05-15 01:05:44.038828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.584 [2024-05-15 01:05:44.038843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.584 [2024-05-15 01:05:44.038858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.584 [2024-05-15 01:05:44.038871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.584 [2024-05-15 01:05:44.038885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.584 [2024-05-15 01:05:44.038898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.584 [2024-05-15 01:05:44.038913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:74648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.584 [2024-05-15 01:05:44.038926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.584 [2024-05-15 01:05:44.038964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.584 [2024-05-15 01:05:44.038978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.584 [2024-05-15 01:05:44.038993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.584 [2024-05-15 01:05:44.039007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.584 [2024-05-15 01:05:44.039022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:74672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.584 [2024-05-15 01:05:44.039037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.584 [2024-05-15 
01:05:44.039051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:74680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.584 [2024-05-15 01:05:44.039065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.584 [2024-05-15 01:05:44.039080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.584 [2024-05-15 01:05:44.039095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.584 [2024-05-15 01:05:44.039110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.584 [2024-05-15 01:05:44.039123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.584 [2024-05-15 01:05:44.039138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.584 [2024-05-15 01:05:44.039152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.584 [2024-05-15 01:05:44.039166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.584 [2024-05-15 01:05:44.039180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.584 [2024-05-15 01:05:44.039195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.584 [2024-05-15 01:05:44.039213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.584 [2024-05-15 01:05:44.039258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.584 [2024-05-15 01:05:44.039276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74728 len:8 PRP1 0x0 PRP2 0x0 00:18:46.584 [2024-05-15 01:05:44.039289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.584 [2024-05-15 01:05:44.039307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.584 [2024-05-15 01:05:44.039318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.584 [2024-05-15 01:05:44.039329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74736 len:8 PRP1 0x0 PRP2 0x0 00:18:46.584 [2024-05-15 01:05:44.039342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.584 [2024-05-15 01:05:44.039354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.584 [2024-05-15 01:05:44.039365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.584 [2024-05-15 01:05:44.039376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74744 len:8 PRP1 0x0 PRP2 0x0 00:18:46.584 
[2024-05-15 01:05:44.039388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.584 [2024-05-15 01:05:44.039401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.584 [2024-05-15 01:05:44.039411] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.584 [2024-05-15 01:05:44.039422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74752 len:8 PRP1 0x0 PRP2 0x0 00:18:46.584 [2024-05-15 01:05:44.039434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.584 [2024-05-15 01:05:44.039447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.584 [2024-05-15 01:05:44.039457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.584 [2024-05-15 01:05:44.039468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74760 len:8 PRP1 0x0 PRP2 0x0 00:18:46.584 [2024-05-15 01:05:44.039481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.584 [2024-05-15 01:05:44.039493] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.584 [2024-05-15 01:05:44.039503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.584 [2024-05-15 01:05:44.039514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74768 len:8 PRP1 0x0 PRP2 0x0 00:18:46.584 [2024-05-15 01:05:44.039527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.584 [2024-05-15 01:05:44.039539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.584 [2024-05-15 01:05:44.039550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.584 [2024-05-15 01:05:44.039560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74776 len:8 PRP1 0x0 PRP2 0x0 00:18:46.584 [2024-05-15 01:05:44.039572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.584 [2024-05-15 01:05:44.039585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.584 [2024-05-15 01:05:44.039596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.584 [2024-05-15 01:05:44.039607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74784 len:8 PRP1 0x0 PRP2 0x0 00:18:46.584 [2024-05-15 01:05:44.039623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.584 [2024-05-15 01:05:44.039636] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.584 [2024-05-15 01:05:44.039647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.584 [2024-05-15 01:05:44.039658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74792 len:8 PRP1 0x0 PRP2 0x0 00:18:46.584 [2024-05-15 01:05:44.039671] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.584 [2024-05-15 01:05:44.039683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.584 [2024-05-15 01:05:44.039694] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.584 [2024-05-15 01:05:44.039705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74800 len:8 PRP1 0x0 PRP2 0x0 00:18:46.584 [2024-05-15 01:05:44.039717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.584 [2024-05-15 01:05:44.039729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.584 [2024-05-15 01:05:44.039740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.584 [2024-05-15 01:05:44.039751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74808 len:8 PRP1 0x0 PRP2 0x0 00:18:46.584 [2024-05-15 01:05:44.039763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.584 [2024-05-15 01:05:44.039776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.584 [2024-05-15 01:05:44.039786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.584 [2024-05-15 01:05:44.039797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74816 len:8 PRP1 0x0 PRP2 0x0 00:18:46.584 [2024-05-15 01:05:44.039810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.584 [2024-05-15 01:05:44.039822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.584 [2024-05-15 01:05:44.039833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.584 [2024-05-15 01:05:44.039844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74824 len:8 PRP1 0x0 PRP2 0x0 00:18:46.584 [2024-05-15 01:05:44.039856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.584 [2024-05-15 01:05:44.039868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.584 [2024-05-15 01:05:44.039879] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.584 [2024-05-15 01:05:44.039890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74832 len:8 PRP1 0x0 PRP2 0x0 00:18:46.584 [2024-05-15 01:05:44.039903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.584 [2024-05-15 01:05:44.039938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.584 [2024-05-15 01:05:44.039951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.584 [2024-05-15 01:05:44.039963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74840 len:8 PRP1 0x0 PRP2 0x0 00:18:46.584 [2024-05-15 01:05:44.039976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.584 [2024-05-15 01:05:44.039989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.585 [2024-05-15 01:05:44.040000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.585 [2024-05-15 01:05:44.040015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74848 len:8 PRP1 0x0 PRP2 0x0 00:18:46.585 [2024-05-15 01:05:44.040028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.585 [2024-05-15 01:05:44.040041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.585 [2024-05-15 01:05:44.040053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.585 [2024-05-15 01:05:44.040064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74856 len:8 PRP1 0x0 PRP2 0x0 00:18:46.585 [2024-05-15 01:05:44.040076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.585 [2024-05-15 01:05:44.040089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.585 [2024-05-15 01:05:44.040101] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.585 [2024-05-15 01:05:44.040112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74864 len:8 PRP1 0x0 PRP2 0x0 00:18:46.585 [2024-05-15 01:05:44.040125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.585 [2024-05-15 01:05:44.040138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.585 [2024-05-15 01:05:44.040148] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.585 [2024-05-15 01:05:44.040159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74872 len:8 PRP1 0x0 PRP2 0x0 00:18:46.585 [2024-05-15 01:05:44.040172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.585 [2024-05-15 01:05:44.040185] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.585 [2024-05-15 01:05:44.040196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.585 [2024-05-15 01:05:44.040207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74880 len:8 PRP1 0x0 PRP2 0x0 00:18:46.585 [2024-05-15 01:05:44.040219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.585 [2024-05-15 01:05:44.040248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.585 [2024-05-15 01:05:44.040259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.585 [2024-05-15 01:05:44.040270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74888 len:8 PRP1 0x0 PRP2 0x0 00:18:46.585 [2024-05-15 01:05:44.040282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:18:46.585 [2024-05-15 01:05:44.040294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.585 [2024-05-15 01:05:44.040304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.585 [2024-05-15 01:05:44.040315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74896 len:8 PRP1 0x0 PRP2 0x0 00:18:46.585 [2024-05-15 01:05:44.040328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.585 [2024-05-15 01:05:44.040340] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.585 [2024-05-15 01:05:44.040350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.585 [2024-05-15 01:05:44.040361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74904 len:8 PRP1 0x0 PRP2 0x0 00:18:46.585 [2024-05-15 01:05:44.040373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.585 [2024-05-15 01:05:44.040386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.585 [2024-05-15 01:05:44.040399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.585 [2024-05-15 01:05:44.040411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74912 len:8 PRP1 0x0 PRP2 0x0 00:18:46.585 [2024-05-15 01:05:44.040423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.585 [2024-05-15 01:05:44.040435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.585 [2024-05-15 01:05:44.040458] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.585 [2024-05-15 01:05:44.040470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74920 len:8 PRP1 0x0 PRP2 0x0 00:18:46.585 [2024-05-15 01:05:44.040482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.585 [2024-05-15 01:05:44.040495] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.585 [2024-05-15 01:05:44.040505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.585 [2024-05-15 01:05:44.040516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74928 len:8 PRP1 0x0 PRP2 0x0 00:18:46.585 [2024-05-15 01:05:44.040529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.585 [2024-05-15 01:05:44.040541] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.585 [2024-05-15 01:05:44.040552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.585 [2024-05-15 01:05:44.040563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73976 len:8 PRP1 0x0 PRP2 0x0 00:18:46.585 [2024-05-15 01:05:44.040575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.585 [2024-05-15 01:05:44.040587] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.585 [2024-05-15 01:05:44.040598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.585 [2024-05-15 01:05:44.040609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73984 len:8 PRP1 0x0 PRP2 0x0 00:18:46.585 [2024-05-15 01:05:44.040621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.585 [2024-05-15 01:05:44.040634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.585 [2024-05-15 01:05:44.040644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.585 [2024-05-15 01:05:44.040655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73992 len:8 PRP1 0x0 PRP2 0x0 00:18:46.585 [2024-05-15 01:05:44.040667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.585 [2024-05-15 01:05:44.040680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.585 [2024-05-15 01:05:44.040691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.585 [2024-05-15 01:05:44.040702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74000 len:8 PRP1 0x0 PRP2 0x0 00:18:46.585 [2024-05-15 01:05:44.040714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.585 [2024-05-15 01:05:44.040726] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.585 [2024-05-15 01:05:44.040737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.585 [2024-05-15 01:05:44.040748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74008 len:8 PRP1 0x0 PRP2 0x0 00:18:46.585 [2024-05-15 01:05:44.040760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.585 [2024-05-15 01:05:44.040776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.585 [2024-05-15 01:05:44.040787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.585 [2024-05-15 01:05:44.040798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74016 len:8 PRP1 0x0 PRP2 0x0 00:18:46.585 [2024-05-15 01:05:44.040811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.585 [2024-05-15 01:05:44.040823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.585 [2024-05-15 01:05:44.040839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.585 [2024-05-15 01:05:44.040851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74024 len:8 PRP1 0x0 PRP2 0x0 00:18:46.585 [2024-05-15 01:05:44.040863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.585 [2024-05-15 01:05:44.040876] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:18:46.585 [2024-05-15 01:05:44.040887] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.585 [2024-05-15 01:05:44.040898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74032 len:8 PRP1 0x0 PRP2 0x0 00:18:46.585 [2024-05-15 01:05:44.040916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.585 [2024-05-15 01:05:44.040952] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.585 [2024-05-15 01:05:44.040967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.585 [2024-05-15 01:05:44.040979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74040 len:8 PRP1 0x0 PRP2 0x0 00:18:46.585 [2024-05-15 01:05:44.040992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.585 [2024-05-15 01:05:44.041005] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.585 [2024-05-15 01:05:44.041016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.585 [2024-05-15 01:05:44.041027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74048 len:8 PRP1 0x0 PRP2 0x0 00:18:46.585 [2024-05-15 01:05:44.041040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.585 [2024-05-15 01:05:44.041053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.585 [2024-05-15 01:05:44.041064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.585 [2024-05-15 01:05:44.041075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74056 len:8 PRP1 0x0 PRP2 0x0 00:18:46.585 [2024-05-15 01:05:44.041088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.585 [2024-05-15 01:05:44.041100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.585 [2024-05-15 01:05:44.041111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.585 [2024-05-15 01:05:44.041123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74064 len:8 PRP1 0x0 PRP2 0x0 00:18:46.585 [2024-05-15 01:05:44.041136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.585 [2024-05-15 01:05:44.041149] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.585 [2024-05-15 01:05:44.041159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.585 [2024-05-15 01:05:44.041171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74072 len:8 PRP1 0x0 PRP2 0x0 00:18:46.585 [2024-05-15 01:05:44.041187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.585 [2024-05-15 01:05:44.041201] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.586 [2024-05-15 01:05:44.041212] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.586 [2024-05-15 01:05:44.041224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74080 len:8 PRP1 0x0 PRP2 0x0 00:18:46.586 [2024-05-15 01:05:44.041250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.586 [2024-05-15 01:05:44.041264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.586 [2024-05-15 01:05:44.041280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.586 [2024-05-15 01:05:44.041292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74088 len:8 PRP1 0x0 PRP2 0x0 00:18:46.586 [2024-05-15 01:05:44.041304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.586 [2024-05-15 01:05:44.041362] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1376170 was disconnected and freed. reset controller. 00:18:46.586 [2024-05-15 01:05:44.041388] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:18:46.586 [2024-05-15 01:05:44.041435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.586 [2024-05-15 01:05:44.041460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.586 [2024-05-15 01:05:44.041476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.586 [2024-05-15 01:05:44.041489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.586 [2024-05-15 01:05:44.041503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.586 [2024-05-15 01:05:44.041516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.586 [2024-05-15 01:05:44.041530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.586 [2024-05-15 01:05:44.041543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.586 [2024-05-15 01:05:44.041556] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:46.586 [2024-05-15 01:05:44.044868] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:46.586 [2024-05-15 01:05:44.044906] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13572f0 (9): Bad file descriptor 00:18:46.586 [2024-05-15 01:05:44.077760] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:46.586 [2024-05-15 01:05:47.701834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.586 [2024-05-15 01:05:47.701898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.586 [2024-05-15 01:05:47.701926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.586 [2024-05-15 01:05:47.701949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.586 [2024-05-15 01:05:47.701964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.586 [2024-05-15 01:05:47.701991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.586 [2024-05-15 01:05:47.702005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.586 [2024-05-15 01:05:47.702018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.586 [2024-05-15 01:05:47.702032] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13572f0 is same with the state(5) to be set 00:18:46.586 [2024-05-15 01:05:47.704529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.586 [2024-05-15 01:05:47.704554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.586 [2024-05-15 01:05:47.704577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:83640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.586 [2024-05-15 01:05:47.704593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.586 [2024-05-15 01:05:47.704609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.586 [2024-05-15 01:05:47.704624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.586 [2024-05-15 01:05:47.704639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:83656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.586 [2024-05-15 01:05:47.704654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.586 [2024-05-15 01:05:47.704669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:83664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.586 [2024-05-15 01:05:47.704683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.586 [2024-05-15 01:05:47.704698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:83672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.586 [2024-05-15 01:05:47.704712] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.586 [2024-05-15 01:05:47.704728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:83680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.586 [2024-05-15 01:05:47.704742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.586 [2024-05-15 01:05:47.704757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:83688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.586 [2024-05-15 01:05:47.704771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.586 [2024-05-15 01:05:47.704787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:83696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.586 [2024-05-15 01:05:47.704801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.586 [2024-05-15 01:05:47.704816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:83704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.586 [2024-05-15 01:05:47.704846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.586 [2024-05-15 01:05:47.704862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:83712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.586 [2024-05-15 01:05:47.704881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.586 [2024-05-15 01:05:47.704897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:83720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.586 [2024-05-15 01:05:47.704911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.586 [2024-05-15 01:05:47.704925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:83728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.586 [2024-05-15 01:05:47.704966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.586 [2024-05-15 01:05:47.704984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:83736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.586 [2024-05-15 01:05:47.704998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.586 [2024-05-15 01:05:47.705013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:83744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.586 [2024-05-15 01:05:47.705027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.586 [2024-05-15 01:05:47.705042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:83752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.586 [2024-05-15 01:05:47.705057] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.586 [2024-05-15 01:05:47.705072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:83760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.586 [2024-05-15 01:05:47.705101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.586 [2024-05-15 01:05:47.705118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:83768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.586 [2024-05-15 01:05:47.705132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.705147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:83776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.705160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.705175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:83784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.705188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.705202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:83792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.705216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.705230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.705258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.705272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:83808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.705285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.705303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:83816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.705318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.705332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.705345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.705360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.705373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.705387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.705400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.705414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.705427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.705441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.705454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.705468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.705481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.705496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.705509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.705522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.705535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.705550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.705563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.705577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.705590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.705604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.705617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.705631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.705648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:46.587 [2024-05-15 01:05:47.705662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.705675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.705689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.705703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.705717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.705731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.705745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.705757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.705771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.705785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.705798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:83960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.705811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.705825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:83968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.705839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.705853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.705866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.705881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:83984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.705894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.705908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.705944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.705961] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.705989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.706005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.706019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.706034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:84016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.706052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.706068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.706082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.706097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.706112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.706127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.706141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.706156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.706169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.706184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.706198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.706213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.706241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.706256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.706270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.706284] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.706313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.706328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.706341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.706355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.587 [2024-05-15 01:05:47.706368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.587 [2024-05-15 01:05:47.706382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.706395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.706409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.706422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.706441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.706454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.706469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.706482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.706496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:84136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.706509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.706523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.706536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.706551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.706564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.706579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:46 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.706592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.706606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.706619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.706633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.706647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.706660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.706673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.706687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.706701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.706715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:84200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.706728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.706742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.706755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.706769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.706785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.706800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.706813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.706827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:84232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.706840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.706854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:84240 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.706867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.706881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:84248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.706894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.706908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:84256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.706921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.706956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:84264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.706972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.706987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:84272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.707001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.707016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:84280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.707030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.707045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:84288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.707058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.707074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:84296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.707087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.707102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:84304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.707116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.707130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:84312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.707143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.707161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:84320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:46.588 [2024-05-15 01:05:47.707176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.707192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:84328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.707205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.707220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:84336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.707248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.707263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:84344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.707276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.707290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.707303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.707317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:84360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.707330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.707345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:84368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.707358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.707372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:84376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.707385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.707399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:84384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.707412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.707426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.707439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.707453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:84400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 
01:05:47.707467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.707481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:84408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.707494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.707508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:84416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.707524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.707540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.588 [2024-05-15 01:05:47.707553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.588 [2024-05-15 01:05:47.707566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:84432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.589 [2024-05-15 01:05:47.707580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:47.707594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:84440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.589 [2024-05-15 01:05:47.707607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:47.707621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:84448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.589 [2024-05-15 01:05:47.707634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:47.707647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:84456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.589 [2024-05-15 01:05:47.707660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:47.707675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:84464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.589 [2024-05-15 01:05:47.707688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:47.707702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:84472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.589 [2024-05-15 01:05:47.707715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:47.707730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:84480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.589 [2024-05-15 01:05:47.707743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:47.707757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:84488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.589 [2024-05-15 01:05:47.707770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:47.707784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:84496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.589 [2024-05-15 01:05:47.707797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:47.707811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:84504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.589 [2024-05-15 01:05:47.707824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:47.707838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.589 [2024-05-15 01:05:47.707851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:47.707868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:84520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.589 [2024-05-15 01:05:47.707882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:47.707896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:84528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.589 [2024-05-15 01:05:47.707910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:47.707924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:84536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.589 [2024-05-15 01:05:47.707963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:47.707979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:84544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.589 [2024-05-15 01:05:47.707993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:47.708008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.589 [2024-05-15 01:05:47.708022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:47.708037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:84560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.589 [2024-05-15 01:05:47.708050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:47.708065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:84568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.589 [2024-05-15 01:05:47.708078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:47.708093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.589 [2024-05-15 01:05:47.708107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:47.708121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.589 [2024-05-15 01:05:47.708135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:47.708150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.589 [2024-05-15 01:05:47.708164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:47.708178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.589 [2024-05-15 01:05:47.708192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:47.708206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.589 [2024-05-15 01:05:47.708220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:47.708234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.589 [2024-05-15 01:05:47.708263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:47.708282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.589 [2024-05-15 01:05:47.708296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:47.708310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:84576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.589 [2024-05-15 01:05:47.708323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:47.708338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:84584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.589 [2024-05-15 01:05:47.708351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:47.708380] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1378060 is same with the state(5) to be set 00:18:46.589 [2024-05-15 01:05:47.708396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.589 [2024-05-15 01:05:47.708407] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.589 [2024-05-15 01:05:47.708419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84592 len:8 PRP1 0x0 PRP2 0x0 00:18:46.589 [2024-05-15 01:05:47.708431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:47.708486] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1378060 was disconnected and freed. reset controller. 00:18:46.589 [2024-05-15 01:05:47.708504] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:18:46.589 [2024-05-15 01:05:47.708517] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:46.589 [2024-05-15 01:05:47.711858] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:46.589 [2024-05-15 01:05:47.711914] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13572f0 (9): Bad file descriptor 00:18:46.589 [2024-05-15 01:05:47.784064] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:46.589 [2024-05-15 01:05:52.271365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.589 [2024-05-15 01:05:52.271410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:52.271438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:30064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.589 [2024-05-15 01:05:52.271454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:52.271470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:30072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.589 [2024-05-15 01:05:52.271483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:52.271498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:30080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.589 [2024-05-15 01:05:52.271511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:52.271525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:30088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.589 [2024-05-15 01:05:52.271538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:52.271558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30096 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.589 [2024-05-15 01:05:52.271572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:52.271586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.589 [2024-05-15 01:05:52.271598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:52.271612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.589 [2024-05-15 01:05:52.271625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:52.271639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.589 [2024-05-15 01:05:52.271652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.589 [2024-05-15 01:05:52.271666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.589 [2024-05-15 01:05:52.271679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.590 [2024-05-15 01:05:52.271693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:30136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.590 [2024-05-15 01:05:52.271706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.590 [2024-05-15 01:05:52.271720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:30144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.590 [2024-05-15 01:05:52.271733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.590 [2024-05-15 01:05:52.271747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.590 [2024-05-15 01:05:52.271759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.590 [2024-05-15 01:05:52.271774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:30160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.590 [2024-05-15 01:05:52.271786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.590 [2024-05-15 01:05:52.271800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:30168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.590 [2024-05-15 01:05:52.271813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.590 [2024-05-15 01:05:52.271827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:30176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:46.590 [2024-05-15 01:05:52.271839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.590 [2024-05-15 01:05:52.271855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:30184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.590 [2024-05-15 01:05:52.271868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.590 [2024-05-15 01:05:52.271883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:30192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.590 [2024-05-15 01:05:52.271899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.590 [2024-05-15 01:05:52.271935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:30200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.590 [2024-05-15 01:05:52.271952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.590 [2024-05-15 01:05:52.271967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:30208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.590 [2024-05-15 01:05:52.271995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.590 [2024-05-15 01:05:52.272010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.590 [2024-05-15 01:05:52.272024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.590 [2024-05-15 01:05:52.272039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:30224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.590 [2024-05-15 01:05:52.272053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.590 [2024-05-15 01:05:52.272068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:30232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.590 [2024-05-15 01:05:52.272081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.590 [2024-05-15 01:05:52.272096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:30240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.590 [2024-05-15 01:05:52.272110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.590 [2024-05-15 01:05:52.272125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.590 [2024-05-15 01:05:52.272138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.590 [2024-05-15 01:05:52.272153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:30256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.590 [2024-05-15 01:05:52.272167] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.590 [2024-05-15 01:05:52.272182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:30264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.590 [2024-05-15 01:05:52.272196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.590 [2024-05-15 01:05:52.272211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:30272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.590 [2024-05-15 01:05:52.272224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.590 [2024-05-15 01:05:52.272255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:30280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.590 [2024-05-15 01:05:52.272268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.590 [2024-05-15 01:05:52.272282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:30288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.590 [2024-05-15 01:05:52.272310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.590 [2024-05-15 01:05:52.272329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:30296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.590 [2024-05-15 01:05:52.272342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.590 [2024-05-15 01:05:52.272356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:30304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.590 [2024-05-15 01:05:52.272384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.590 [2024-05-15 01:05:52.272399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:30312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.590 [2024-05-15 01:05:52.272413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.590 [2024-05-15 01:05:52.272427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:30320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.590 [2024-05-15 01:05:52.272441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.590 [2024-05-15 01:05:52.272455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:30328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.590 [2024-05-15 01:05:52.272469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.590 [2024-05-15 01:05:52.272482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:30336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.590 [2024-05-15 01:05:52.272495] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.590 [2024-05-15 01:05:52.272509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:30344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.590 [2024-05-15 01:05:52.272522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.590 [2024-05-15 01:05:52.272537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.590 [2024-05-15 01:05:52.272549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.590 [2024-05-15 01:05:52.272563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:30360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.590 [2024-05-15 01:05:52.272576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.590 [2024-05-15 01:05:52.272590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:30368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.590 [2024-05-15 01:05:52.272603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.590 [2024-05-15 01:05:52.272618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:30376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.590 [2024-05-15 01:05:52.272631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.590 [2024-05-15 01:05:52.272645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:30384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.590 [2024-05-15 01:05:52.272657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.590 [2024-05-15 01:05:52.272672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.590 [2024-05-15 01:05:52.272688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.590 [2024-05-15 01:05:52.272703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:30400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.590 [2024-05-15 01:05:52.272716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.590 [2024-05-15 01:05:52.272731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:30416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.272745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.272759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:30424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.272772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.272786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:30432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.272799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.272830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:30440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.272844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.272859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:30448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.272872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.272887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:30456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.272901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.272916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.272940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.272957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:30472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.272972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.272987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:30408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.591 [2024-05-15 01:05:52.273000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.273015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.273029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.273043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:30488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.273056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.273071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:30496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.273094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.273109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:30504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.273123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.273138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:30512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.273153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.273168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:30520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.273181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.273196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:30528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.273210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.273225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.273238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.273268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:30544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.273282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.273296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:30552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.273310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.273323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:30560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.273338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.273353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:30568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.273365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.273380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:30576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.273393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 
01:05:52.273407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:30584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.273421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.273435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:30592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.273448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.273466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.273480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.273494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:30608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.273508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.273522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:30616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.273535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.273550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:30624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.273564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.273578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:30632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.273591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.273605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:30640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.273618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.273632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:30648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.273645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.273659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:30656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.273672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.273686] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:30664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.273700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.273715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:30672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.273728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.273742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:30680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.273755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.273769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:30688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.273782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.273797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:30696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.273814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.273829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:30704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.273843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.273858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:30712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.273871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.273885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:30720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.273898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.273919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.591 [2024-05-15 01:05:52.273956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.591 [2024-05-15 01:05:52.273973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:30736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.592 [2024-05-15 01:05:52.273987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.592 [2024-05-15 01:05:52.274003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:8 nsid:1 lba:30744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.592 [2024-05-15 01:05:52.274016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.592 [2024-05-15 01:05:52.274031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:30752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.592 [2024-05-15 01:05:52.274045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.592 [2024-05-15 01:05:52.274060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:30760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.592 [2024-05-15 01:05:52.274074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.592 [2024-05-15 01:05:52.274088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:30768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.592 [2024-05-15 01:05:52.274102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.592 [2024-05-15 01:05:52.274117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:30776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.592 [2024-05-15 01:05:52.274131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.592 [2024-05-15 01:05:52.274145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:30784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.592 [2024-05-15 01:05:52.274158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.592 [2024-05-15 01:05:52.274173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:30792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.592 [2024-05-15 01:05:52.274187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.592 [2024-05-15 01:05:52.274205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:30800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.592 [2024-05-15 01:05:52.274220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.592 [2024-05-15 01:05:52.274250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:30808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.592 [2024-05-15 01:05:52.274264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.592 [2024-05-15 01:05:52.274279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:30816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.592 [2024-05-15 01:05:52.274292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.592 [2024-05-15 01:05:52.274313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:30824 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:18:46.592 [2024-05-15 01:05:52.274328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.592 [2024-05-15 01:05:52.274342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:30832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.592 [2024-05-15 01:05:52.274356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.592 [2024-05-15 01:05:52.274371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.592 [2024-05-15 01:05:52.274383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.592 [2024-05-15 01:05:52.274398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:30848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.592 [2024-05-15 01:05:52.274411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.592 [2024-05-15 01:05:52.274432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:30856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:46.592 [2024-05-15 01:05:52.274446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.592 [2024-05-15 01:05:52.274476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.592 [2024-05-15 01:05:52.274493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30864 len:8 PRP1 0x0 PRP2 0x0 00:18:46.592 [2024-05-15 01:05:52.274506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.592 [2024-05-15 01:05:52.274523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.592 [2024-05-15 01:05:52.274535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.592 [2024-05-15 01:05:52.274546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30872 len:8 PRP1 0x0 PRP2 0x0 00:18:46.592 [2024-05-15 01:05:52.274558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.592 [2024-05-15 01:05:52.274571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.592 [2024-05-15 01:05:52.274581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.592 [2024-05-15 01:05:52.274592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30880 len:8 PRP1 0x0 PRP2 0x0 00:18:46.592 [2024-05-15 01:05:52.274604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.592 [2024-05-15 01:05:52.274617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.592 [2024-05-15 01:05:52.274632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.592 [2024-05-15 01:05:52.274644] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30888 len:8 PRP1 0x0 PRP2 0x0 00:18:46.592 [2024-05-15 01:05:52.274656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.592 [2024-05-15 01:05:52.274669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.592 [2024-05-15 01:05:52.274680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.592 [2024-05-15 01:05:52.274690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30896 len:8 PRP1 0x0 PRP2 0x0 00:18:46.592 [2024-05-15 01:05:52.274705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.592 [2024-05-15 01:05:52.274718] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.592 [2024-05-15 01:05:52.274728] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.592 [2024-05-15 01:05:52.274740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30904 len:8 PRP1 0x0 PRP2 0x0 00:18:46.592 [2024-05-15 01:05:52.274760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.592 [2024-05-15 01:05:52.274774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.592 [2024-05-15 01:05:52.274785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.592 [2024-05-15 01:05:52.274795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30912 len:8 PRP1 0x0 PRP2 0x0 00:18:46.592 [2024-05-15 01:05:52.274807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.592 [2024-05-15 01:05:52.274822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.592 [2024-05-15 01:05:52.274834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.592 [2024-05-15 01:05:52.274846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30920 len:8 PRP1 0x0 PRP2 0x0 00:18:46.592 [2024-05-15 01:05:52.274865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.592 [2024-05-15 01:05:52.274879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.592 [2024-05-15 01:05:52.274891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.592 [2024-05-15 01:05:52.274903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30928 len:8 PRP1 0x0 PRP2 0x0 00:18:46.592 [2024-05-15 01:05:52.274916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.592 [2024-05-15 01:05:52.274951] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.592 [2024-05-15 01:05:52.274966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.592 [2024-05-15 01:05:52.274978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:30936 len:8 PRP1 0x0 PRP2 0x0 00:18:46.592 [2024-05-15 01:05:52.274992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.592 [2024-05-15 01:05:52.275006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.592 [2024-05-15 01:05:52.275016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.592 [2024-05-15 01:05:52.275028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30944 len:8 PRP1 0x0 PRP2 0x0 00:18:46.592 [2024-05-15 01:05:52.275041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.592 [2024-05-15 01:05:52.275058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.592 [2024-05-15 01:05:52.275070] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.592 [2024-05-15 01:05:52.275081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30952 len:8 PRP1 0x0 PRP2 0x0 00:18:46.592 [2024-05-15 01:05:52.275093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.592 [2024-05-15 01:05:52.275106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.592 [2024-05-15 01:05:52.275117] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.592 [2024-05-15 01:05:52.275128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30960 len:8 PRP1 0x0 PRP2 0x0 00:18:46.592 [2024-05-15 01:05:52.275140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.592 [2024-05-15 01:05:52.275153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.592 [2024-05-15 01:05:52.275163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.592 [2024-05-15 01:05:52.275174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30968 len:8 PRP1 0x0 PRP2 0x0 00:18:46.592 [2024-05-15 01:05:52.275193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.592 [2024-05-15 01:05:52.275207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.592 [2024-05-15 01:05:52.275218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.592 [2024-05-15 01:05:52.275229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30976 len:8 PRP1 0x0 PRP2 0x0 00:18:46.592 [2024-05-15 01:05:52.275256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.592 [2024-05-15 01:05:52.275269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.592 [2024-05-15 01:05:52.275280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.593 [2024-05-15 01:05:52.275291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30984 len:8 PRP1 0x0 PRP2 0x0 
00:18:46.593 [2024-05-15 01:05:52.275308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.593 [2024-05-15 01:05:52.275322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.593 [2024-05-15 01:05:52.275332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.593 [2024-05-15 01:05:52.275343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30992 len:8 PRP1 0x0 PRP2 0x0 00:18:46.593 [2024-05-15 01:05:52.275355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.593 [2024-05-15 01:05:52.275368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.593 [2024-05-15 01:05:52.275378] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.593 [2024-05-15 01:05:52.275389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31000 len:8 PRP1 0x0 PRP2 0x0 00:18:46.593 [2024-05-15 01:05:52.275402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.593 [2024-05-15 01:05:52.275414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.593 [2024-05-15 01:05:52.275424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.593 [2024-05-15 01:05:52.275434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31008 len:8 PRP1 0x0 PRP2 0x0 00:18:46.593 [2024-05-15 01:05:52.275451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.593 [2024-05-15 01:05:52.275464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.593 [2024-05-15 01:05:52.275474] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.593 [2024-05-15 01:05:52.275485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31016 len:8 PRP1 0x0 PRP2 0x0 00:18:46.593 [2024-05-15 01:05:52.275497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.593 [2024-05-15 01:05:52.275510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.593 [2024-05-15 01:05:52.275521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.593 [2024-05-15 01:05:52.275531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31024 len:8 PRP1 0x0 PRP2 0x0 00:18:46.593 [2024-05-15 01:05:52.275543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.593 [2024-05-15 01:05:52.290855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.593 [2024-05-15 01:05:52.290883] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.593 [2024-05-15 01:05:52.290896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31032 len:8 PRP1 0x0 PRP2 0x0 00:18:46.593 [2024-05-15 01:05:52.290926] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.593 [2024-05-15 01:05:52.290951] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.593 [2024-05-15 01:05:52.290963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.593 [2024-05-15 01:05:52.290990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31040 len:8 PRP1 0x0 PRP2 0x0 00:18:46.593 [2024-05-15 01:05:52.291004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.593 [2024-05-15 01:05:52.291017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.593 [2024-05-15 01:05:52.291028] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.593 [2024-05-15 01:05:52.291040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31048 len:8 PRP1 0x0 PRP2 0x0 00:18:46.593 [2024-05-15 01:05:52.291054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.593 [2024-05-15 01:05:52.291068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.593 [2024-05-15 01:05:52.291079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.593 [2024-05-15 01:05:52.291090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31056 len:8 PRP1 0x0 PRP2 0x0 00:18:46.593 [2024-05-15 01:05:52.291103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.593 [2024-05-15 01:05:52.291117] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.593 [2024-05-15 01:05:52.291128] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.593 [2024-05-15 01:05:52.291139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31064 len:8 PRP1 0x0 PRP2 0x0 00:18:46.593 [2024-05-15 01:05:52.291152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.593 [2024-05-15 01:05:52.291165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:46.593 [2024-05-15 01:05:52.291182] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:46.593 [2024-05-15 01:05:52.291194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31072 len:8 PRP1 0x0 PRP2 0x0 00:18:46.593 [2024-05-15 01:05:52.291207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.593 [2024-05-15 01:05:52.291300] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x137ad10 was disconnected and freed. reset controller. 
00:18:46.593 [2024-05-15 01:05:52.291319] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:18:46.593 [2024-05-15 01:05:52.291371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.593 [2024-05-15 01:05:52.291391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.593 [2024-05-15 01:05:52.291407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.593 [2024-05-15 01:05:52.291434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.593 [2024-05-15 01:05:52.291450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.593 [2024-05-15 01:05:52.291463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.593 [2024-05-15 01:05:52.291478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:46.593 [2024-05-15 01:05:52.291492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:46.593 [2024-05-15 01:05:52.291505] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:46.593 [2024-05-15 01:05:52.291564] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13572f0 (9): Bad file descriptor 00:18:46.593 [2024-05-15 01:05:52.294889] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:46.593 [2024-05-15 01:05:52.459142] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
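The wall of ABORTED - SQ DELETION completions above is the initiator side of a forced failover: when the active path is removed, every queued READ/WRITE on that qpair is completed as aborted, the disconnected qpair is freed, and bdev_nvme resets the controller onto the next registered trid (here failing over from 10.0.0.2:4422 back to 10.0.0.2:4420). As a minimal sketch, the same path flip can be driven by hand with the rpc.py calls that appear later in this trace; socket path, address and subsystem NQN are taken from this log and would need adjusting for another setup:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# target side: expose the same subsystem on two extra ports
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
# initiator side (bdevperf's RPC socket): register all three paths under one controller name
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# drop the active path; queued I/O completes as ABORTED - SQ DELETION and the controller resets onto the next trid
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1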
00:18:46.593 
00:18:46.593 Latency(us)
00:18:46.593 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:46.593 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:46.593 Verification LBA range: start 0x0 length 0x4000
00:18:46.593 NVMe0n1 : 15.01 8549.99 33.40 680.46 0.00 13837.94 1080.13 29515.47
00:18:46.593 ===================================================================================================================
00:18:46.593 Total : 8549.99 33.40 680.46 0.00 13837.94 1080.13 29515.47
00:18:46.593 Received shutdown signal, test time was about 15.000000 seconds
00:18:46.593 
00:18:46.593 Latency(us)
00:18:46.593 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:46.593 ===================================================================================================================
00:18:46.593 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:46.593 01:05:58 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:18:46.593 01:05:58 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:18:46.593 01:05:58 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:18:46.593 01:05:58 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1303460
00:18:46.593 01:05:58 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:18:46.593 01:05:58 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1303460 /var/tmp/bdevperf.sock
00:18:46.593 01:05:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 1303460 ']'
00:18:46.593 01:05:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:46.593 01:05:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100
00:18:46.593 01:05:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:46.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
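The trace just above checks the pass criterion for the first bdevperf run (exactly three 'Resetting controller successful' messages) and then relaunches bdevperf with I/O deferred so the next set of paths can be configured over its RPC socket. Roughly, the pattern being exercised is the sketch below; the paths and flags come from this trace, the try.txt name is the bdevperf log file the test captures, and the waitforlisten helper is stood in for by a comment:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
count=$(grep -c 'Resetting controller successful' try.txt)   # try.txt: captured bdevperf output, path assumed from this log
(( count == 3 )) || exit 1                                    # the script requires exactly three successful resets
# second pass: start bdevperf with the run deferred (-z), then configure paths over its RPC socket
$spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!
# ... wait for /var/tmp/bdevperf.sock to answer, then the bdev_nvme_attach_controller calls seen below ...
$spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
# bdevperf itself keeps running under -z; the test stops it later with killprocess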
00:18:46.593 01:05:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:46.593 01:05:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:46.593 01:05:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:46.593 01:05:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:18:46.593 01:05:58 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:46.593 [2024-05-15 01:05:58.785627] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:46.593 01:05:58 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:46.851 [2024-05-15 01:05:59.034360] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:18:46.851 01:05:59 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:47.109 NVMe0n1 00:18:47.109 01:05:59 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:47.676 00:18:47.676 01:05:59 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:47.933 00:18:47.933 01:06:00 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:47.933 01:06:00 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:18:48.190 01:06:00 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:48.447 01:06:00 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:18:51.817 01:06:03 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:51.817 01:06:03 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:18:51.817 01:06:04 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1304245 00:18:51.817 01:06:04 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:51.817 01:06:04 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1304245 00:18:53.190 0 00:18:53.190 01:06:05 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:18:53.190 [2024-05-15 01:05:58.261479] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:18:53.190 [2024-05-15 01:05:58.261580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1303460 ] 00:18:53.190 EAL: No free 2048 kB hugepages reported on node 1 00:18:53.190 [2024-05-15 01:05:58.330368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.190 [2024-05-15 01:05:58.435446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.190 [2024-05-15 01:06:00.748403] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:18:53.190 [2024-05-15 01:06:00.748502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:53.190 [2024-05-15 01:06:00.748527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:53.190 [2024-05-15 01:06:00.748543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:53.190 [2024-05-15 01:06:00.748558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:53.190 [2024-05-15 01:06:00.748572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:53.190 [2024-05-15 01:06:00.748586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:53.190 [2024-05-15 01:06:00.748600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:53.190 [2024-05-15 01:06:00.748614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:53.190 [2024-05-15 01:06:00.748627] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:53.190 [2024-05-15 01:06:00.748669] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:53.190 [2024-05-15 01:06:00.748701] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x225c2f0 (9): Bad file descriptor 00:18:53.190 [2024-05-15 01:06:00.841177] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:53.190 Running I/O for 1 seconds... 
00:18:53.190 
00:18:53.190 Latency(us)
00:18:53.190 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:53.190 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:53.190 Verification LBA range: start 0x0 length 0x4000
00:18:53.190 NVMe0n1 : 1.01 8986.29 35.10 0.00 0.00 14165.25 2949.12 17087.91
00:18:53.190 ===================================================================================================================
00:18:53.190 Total : 8986.29 35.10 0.00 0.00 14165.25 2949.12 17087.91
00:18:53.190 01:06:05 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:53.190 01:06:05 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:18:53.190 01:06:05 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:18:53.448 01:06:05 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:53.448 01:06:05 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:18:53.706 01:06:06 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:18:53.964 01:06:06 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:18:57.247 01:06:09 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:57.247 01:06:09 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:18:57.247 01:06:09 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1303460
00:18:57.247 01:06:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 1303460 ']'
00:18:57.247 01:06:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 1303460
00:18:57.247 01:06:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname
00:18:57.247 01:06:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:18:57.247 01:06:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1303460
00:18:57.247 01:06:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:18:57.247 01:06:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:18:57.247 01:06:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1303460'
00:18:57.247 killing process with pid 1303460
00:18:57.247 01:06:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 1303460
00:18:57.247 01:06:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 1303460
00:18:57.505 01:06:09 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:18:57.505 01:06:09 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:18:57.763 01:06:10 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:18:57.763
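With the verify runs done, the trace above tears the initiator and target sides back down in order: stop bdevperf, delete the target subsystem, then clear the error traps. Condensed into a sketch using the same commands (killprocess and the traps come from the autotest_common.sh helpers whose expansion is traced here; pid and NQN are the ones in this log):

kill 1303460 && wait 1303460                                                              # stop the bdevperf started earlier
sync
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # remove the target subsystem
trap - SIGINT SIGTERM EXIT                                                                # clear the error traps before the normal exit path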
01:06:10 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:18:58.021 01:06:10 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:18:58.021 01:06:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:58.021 01:06:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:18:58.021 01:06:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:58.021 01:06:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:18:58.021 01:06:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:58.021 01:06:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:58.021 rmmod nvme_tcp 00:18:58.021 rmmod nvme_fabrics 00:18:58.021 rmmod nvme_keyring 00:18:58.021 01:06:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:58.021 01:06:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:18:58.021 01:06:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:18:58.021 01:06:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1301177 ']' 00:18:58.021 01:06:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1301177 00:18:58.021 01:06:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 1301177 ']' 00:18:58.021 01:06:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 1301177 00:18:58.022 01:06:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:18:58.022 01:06:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:58.022 01:06:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1301177 00:18:58.022 01:06:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:58.022 01:06:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:58.022 01:06:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1301177' 00:18:58.022 killing process with pid 1301177 00:18:58.022 01:06:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 1301177 00:18:58.022 [2024-05-15 01:06:10.239638] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:58.022 01:06:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 1301177 00:18:58.281 01:06:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:58.281 01:06:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:58.281 01:06:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:58.281 01:06:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:58.281 01:06:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:58.281 01:06:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.281 01:06:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:58.281 01:06:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.817 01:06:12 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:00.817 00:19:00.817 real 0m36.586s 00:19:00.817 user 
2m7.017s 00:19:00.817 sys 0m6.435s 00:19:00.817 01:06:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:00.817 01:06:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:00.817 ************************************ 00:19:00.817 END TEST nvmf_failover 00:19:00.817 ************************************ 00:19:00.817 01:06:12 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:00.817 01:06:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:00.817 01:06:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:00.817 01:06:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:00.817 ************************************ 00:19:00.817 START TEST nvmf_host_discovery 00:19:00.817 ************************************ 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:00.817 * Looking for test storage... 00:19:00.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 
-- # have_pci_nics=0 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:19:00.817 01:06:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:03.345 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:03.345 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == 
e810 ]] 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:03.345 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:03.345 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:03.345 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:03.345 01:06:15 
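Note: at this point nvmftestinit has scanned the PCI bus for supported NICs and found both Intel E810 ports (vendor 0x8086, device 0x159b) at 0000:0a:00.0 and 0000:0a:00.1; they are exposed as net devices cvl_0_0 and cvl_0_1 and become the target and initiator interfaces respectively. A minimal way to confirm the same devices by hand (illustrative only, not part of the test output; assumes pciutils is installed):

    lspci -nn -d 8086:159b                        # list E810 ports by vendor:device ID
    ls /sys/bus/pci/devices/0000:0a:00.0/net      # show the net device behind the first port (cvl_0_0 here)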
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:03.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:03.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:19:03.346 00:19:03.346 --- 10.0.0.2 ping statistics --- 00:19:03.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.346 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:03.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:03.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:19:03.346 00:19:03.346 --- 10.0.0.1 ping statistics --- 00:19:03.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.346 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1307767 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1307767 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 1307767 ']' 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:03.346 [2024-05-15 01:06:15.357591] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:19:03.346 [2024-05-15 01:06:15.357664] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:03.346 EAL: No free 2048 kB hugepages reported on node 1 00:19:03.346 [2024-05-15 01:06:15.432871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.346 [2024-05-15 01:06:15.543327] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
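Note: the back-to-back TCP topology built above keeps both E810 ports on one host: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2 (target side) while cvl_0_1 stays in the root namespace with 10.0.0.1 (initiator side), and the target application is then launched inside the namespace. A condensed sketch of the equivalent manual setup, using only commands already visible in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                            # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator
    # target app started inside the namespace (path shortened; the trace uses the full build path)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &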
00:19:03.346 [2024-05-15 01:06:15.543378] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:03.346 [2024-05-15 01:06:15.543391] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:03.346 [2024-05-15 01:06:15.543402] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:03.346 [2024-05-15 01:06:15.543412] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:03.346 [2024-05-15 01:06:15.543464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:03.346 [2024-05-15 01:06:15.684411] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:03.346 [2024-05-15 01:06:15.692362] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:03.346 [2024-05-15 01:06:15.692649] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:03.346 null0 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:03.346 null1 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- 
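Note: with the target reactor running on core 1, the test configures it over the default RPC socket: a TCP transport (with the -o and -u 8192 options used by the script), a discovery listener on 10.0.0.2:8009, and two null bdevs (1000 MB, 512-byte blocks) that will later back the subsystem namespaces. Roughly the same sequence with scripts/rpc.py from the SPDK tree (a sketch; rpc_cmd in the trace is a thin wrapper around it):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    scripts/rpc.py bdev_null_create null0 1000 512
    scripts/rpc.py bdev_null_create null1 1000 512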
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1307904 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1307904 /tmp/host.sock 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 1307904 ']' 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:03.346 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:03.346 01:06:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:03.605 [2024-05-15 01:06:15.767561] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:19:03.605 [2024-05-15 01:06:15.767641] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1307904 ] 00:19:03.605 EAL: No free 2048 kB hugepages reported on node 1 00:19:03.605 [2024-05-15 01:06:15.841731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.605 [2024-05-15 01:06:15.953840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:03.864 01:06:16 
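Note: the test now runs two SPDK applications side by side: the target started earlier inside the namespace, and a second nvmf_tgt on core 0 acting as the NVMe-oF host, reachable on its own RPC socket /tmp/host.sock. Discovery is then started against the target's discovery service on 8009. A sketch of the host-side start, again assuming scripts/rpc.py as the RPC client:

    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test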
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:03.864 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.123 [2024-05-15 01:06:16.378429] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:19:04.123 
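Note: the assertions in this test are built from small helpers that query the host application over /tmp/host.sock and flatten the JSON with jq; until the subsystem has a data listener and the host NQN is allowed in, both return empty strings, which is what the '[[ '' == '' ]]' checks above verify. A sketch of the two helpers as reconstructed from the trace:

    get_subsystem_names() { rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs; }
    get_bdev_list()       { rpc_cmd -s /tmp/host.sock bdev_get_bdevs            | jq -r '.[].name' | sort | xargs; }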
01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:04.123 01:06:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:04.382 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.382 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:19:04.382 01:06:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:19:04.948 [2024-05-15 01:06:17.103254] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:04.948 [2024-05-15 01:06:17.103299] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:04.948 [2024-05-15 01:06:17.103329] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:04.948 [2024-05-15 01:06:17.189609] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:19:04.948 [2024-05-15 01:06:17.294575] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:19:04.948 [2024-05-15 01:06:17.294601] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:05.205 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:05.206 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:05.206 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:19:05.206 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:05.206 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:05.206 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.206 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:05.206 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:05.206 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:05.206 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.206 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.206 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:05.206 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:05.206 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:05.206 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:05.206 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:05.206 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:19:05.206 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:19:05.206 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:05.206 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:05.206 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.206 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:05.206 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:05.206 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:05.464 01:06:17 
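Note: each expectation is polled with waitforcondition, which re-evaluates the condition up to ten times, one second apart, rather than asserting immediately; that is why the discovery attach (nvme0 and nvme0n1 appearing on the host) can land asynchronously after nvmf_subsystem_add_host grants access to nqn.2021-12.io.spdk:test. Reconstructed from the autotest_common.sh lines in the trace, the helper looks roughly like:

    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0      # condition met, stop polling
            sleep 1
        done
        return 1                          # timed out after ~10 seconds
    }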
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.464 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery 
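Note: the namespace hot-adds are also checked through the SPDK notification interface: each namespace that shows up on the host side produces one new notification, the helper counts entries newer than the last seen notify_id, and notify_id is then advanced (0 to 1 after null0, 1 to 2 after null1). A sketch of the counting step (notify_id value shown here only for illustration):

    notify_id=1   # last notification already accounted for
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    (( notification_count == 1 ))   # exactly one new event expected after adding null1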
-- common/autotest_common.sh@912 -- # (( max-- )) 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:05.465 [2024-05-15 01:06:17.822572] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:05.465 [2024-05-15 01:06:17.822983] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:05.465 [2024-05-15 01:06:17.823016] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:05.465 01:06:17 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.723 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.723 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:05.723 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:05.723 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:05.723 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:05.723 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:05.723 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:05.723 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:19:05.723 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:05.723 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:05.723 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:05.723 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.723 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:05.723 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:05.723 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.723 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:05.723 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:05.723 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:05.723 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:05.723 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:05.723 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:05.723 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:19:05.723 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:19:05.723 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:05.723 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:05.723 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.723 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:05.723 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:05.723 01:06:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:05.723 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.723 01:06:17 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:19:05.723 01:06:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:19:05.723 [2024-05-15 01:06:17.950415] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:19:05.981 [2024-05-15 01:06:18.249843] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:05.981 [2024-05-15 01:06:18.249869] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:05.981 [2024-05-15 01:06:18.249878] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:06.918 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:06.918 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:19:06.918 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:19:06.918 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:06.918 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.918 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:06.918 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:06.918 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:06.918 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:06.918 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.918 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:06.918 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:06.918 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:19:06.918 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:06.918 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:06.918 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:06.918 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:06.918 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:06.918 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:06.918 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:19:06.918 01:06:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:06.918 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.918 01:06:18 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@74 -- # jq '. | length' 00:19:06.918 01:06:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:06.918 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.918 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:06.918 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:06.918 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:19:06.918 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:06.918 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:06.918 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.918 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:06.918 [2024-05-15 01:06:19.039409] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:06.918 [2024-05-15 01:06:19.039465] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:06.918 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.918 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:06.918 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:06.918 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:06.918 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:06.918 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:06.918 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:19:06.918 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:06.918 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:06.918 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.918 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:06.918 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:06.918 [2024-05-15 01:06:19.045982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:06.918 [2024-05-15 01:06:19.046016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.918 [2024-05-15 01:06:19.046033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:06.918 [2024-05-15 01:06:19.046049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.918 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:06.918 [2024-05-15 01:06:19.046063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:06.918 [2024-05-15 01:06:19.046084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.918 [2024-05-15 01:06:19.046099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:06.918 [2024-05-15 01:06:19.046114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.918 [2024-05-15 01:06:19.046129] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1580840 is same with the state(5) to be set 00:19:06.918 [2024-05-15 01:06:19.055981] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1580840 (9): Bad file descriptor 00:19:06.918 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.918 [2024-05-15 01:06:19.066024] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:06.918 [2024-05-15 01:06:19.066317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:06.918 [2024-05-15 01:06:19.066514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:06.918 [2024-05-15 01:06:19.066541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1580840 with addr=10.0.0.2, port=4420 00:19:06.918 [2024-05-15 01:06:19.066557] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1580840 is same with the state(5) to be set 00:19:06.918 [2024-05-15 01:06:19.066581] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1580840 (9): Bad file descriptor 00:19:06.918 [2024-05-15 01:06:19.066615] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:06.918 [2024-05-15 01:06:19.066635] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:06.918 [2024-05-15 01:06:19.066651] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:06.918 [2024-05-15 01:06:19.066671] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
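The poll loops seen throughout this trace all come from one helper: the expansions at autotest_common.sh@910-@916 show a condition string, a retry budget of 10, an eval of the condition, and a one-second sleep between attempts. A minimal reconstruction of that helper is sketched below; it is inferred from the xtrace output, not copied from autotest_common.sh, so details may differ:

    # waitforcondition-style poll loop, reconstructed from the trace above
    waitforcondition() {
        local cond=$1    # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
        local max=10     # retry budget, visible as 'local max=10' in the trace
        while (( max-- )); do
            eval "$cond" && return 0   # condition holds -> stop waiting
            sleep 1                    # otherwise retry after one second
        done
        return 1                       # condition never became true
    }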
00:19:06.918 [2024-05-15 01:06:19.076101] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:06.918 [2024-05-15 01:06:19.076560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:06.919 [2024-05-15 01:06:19.076812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:06.919 [2024-05-15 01:06:19.076843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1580840 with addr=10.0.0.2, port=4420 00:19:06.919 [2024-05-15 01:06:19.076862] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1580840 is same with the state(5) to be set 00:19:06.919 [2024-05-15 01:06:19.076889] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1580840 (9): Bad file descriptor 00:19:06.919 [2024-05-15 01:06:19.076957] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:06.919 [2024-05-15 01:06:19.076995] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:06.919 [2024-05-15 01:06:19.077019] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:06.919 [2024-05-15 01:06:19.077041] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:06.919 [2024-05-15 01:06:19.086185] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:06.919 [2024-05-15 01:06:19.086455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:06.919 [2024-05-15 01:06:19.086652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:06.919 [2024-05-15 01:06:19.086679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1580840 with addr=10.0.0.2, port=4420 00:19:06.919 [2024-05-15 01:06:19.086695] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1580840 is same with the state(5) to be set 00:19:06.919 [2024-05-15 01:06:19.086718] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1580840 (9): Bad file descriptor 00:19:06.919 [2024-05-15 01:06:19.086738] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:06.919 [2024-05-15 01:06:19.086751] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:06.919 [2024-05-15 01:06:19.086764] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:06.919 [2024-05-15 01:06:19.086783] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
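The notification assertions (is_notification_count_eq) ride on the notify_get_notifications calls in this trace: the host socket is asked for notifications newer than the last recorded notify_id, and jq '. | length' counts them. A rough sketch follows, assuming the notify_id bookkeeping is a simple high-water mark (the values 1 -> 2 -> 4 seen in the trace are consistent with that, but the exact update formula is an inference):

    # get_notification_count as suggested by host/discovery.sh@74-@75 above
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
            | jq '. | length')                          # notifications since notify_id
        notify_id=$((notify_id + notification_count))   # assumed high-water-mark update
    }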
00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:06.919 [2024-05-15 01:06:19.096276] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:06.919 [2024-05-15 01:06:19.096548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:06.919 [2024-05-15 01:06:19.096749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:06.919 [2024-05-15 01:06:19.096774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1580840 with addr=10.0.0.2, port=4420 00:19:06.919 [2024-05-15 01:06:19.096790] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1580840 is same with the state(5) to be set 00:19:06.919 [2024-05-15 01:06:19.096811] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1580840 (9): Bad file descriptor 00:19:06.919 [2024-05-15 01:06:19.096844] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:06.919 [2024-05-15 01:06:19.096862] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:06.919 [2024-05-15 01:06:19.096881] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:06.919 [2024-05-15 01:06:19.096900] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
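The state queries that these waits evaluate are thin rpc_cmd + jq pipelines whose exact shape is visible in the host/discovery.sh@55/@59/@63 trace lines. Restated as functions (reconstructed from the trace; rpc_cmd is the test suite's wrapper around scripts/rpc.py):

    get_subsystem_names() {   # controller names on the host, e.g. "nvme0"
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {         # attached namespaces, e.g. "nvme0n1 nvme0n2"
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_paths() {   # connected ports for one controller, e.g. "4420 4421"
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }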
00:19:06.919 [2024-05-15 01:06:19.106358] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:06.919 [2024-05-15 01:06:19.106626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:06.919 [2024-05-15 01:06:19.106809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:06.919 [2024-05-15 01:06:19.106835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1580840 with addr=10.0.0.2, port=4420 00:19:06.919 [2024-05-15 01:06:19.106852] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1580840 is same with the state(5) to be set 00:19:06.919 [2024-05-15 01:06:19.106874] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1580840 (9): Bad file descriptor 00:19:06.919 [2024-05-15 01:06:19.106907] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:06.919 [2024-05-15 01:06:19.106926] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:06.919 [2024-05-15 01:06:19.106952] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:06.919 [2024-05-15 01:06:19.106972] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.919 [2024-05-15 01:06:19.116437] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:06.919 [2024-05-15 01:06:19.116693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:06.919 [2024-05-15 01:06:19.116872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:06.919 [2024-05-15 01:06:19.116896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1580840 with addr=10.0.0.2, port=4420 00:19:06.919 [2024-05-15 01:06:19.116928] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1580840 is same with the state(5) to be set 00:19:06.919 [2024-05-15 01:06:19.116960] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1580840 (9): Bad file descriptor 00:19:06.919 [2024-05-15 01:06:19.116995] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:06.919 [2024-05-15 01:06:19.117015] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:06.919 [2024-05-15 01:06:19.117029] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:06.919 [2024-05-15 01:06:19.117048] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:06.919 [2024-05-15 01:06:19.126513] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:06.919 [2024-05-15 01:06:19.126785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:06.919 [2024-05-15 01:06:19.126960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:06.919 [2024-05-15 01:06:19.126988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1580840 with addr=10.0.0.2, port=4420 00:19:06.919 [2024-05-15 01:06:19.127004] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1580840 is same with the state(5) to be set 00:19:06.919 [2024-05-15 01:06:19.127027] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1580840 (9): Bad file descriptor 00:19:06.919 [2024-05-15 01:06:19.127079] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:19:06.919 [2024-05-15 01:06:19.127106] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:06.919 [2024-05-15 01:06:19.127144] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:06.919 [2024-05-15 01:06:19.127165] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:06.919 [2024-05-15 01:06:19.127179] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:06.919 [2024-05-15 01:06:19.127202] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
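The errno 111 (connection refused) noise above is expected at this point: host/discovery.sh@127 removed the 4420 listener on the target, so the host's reconnect attempts to that port fail until the discovery poller prunes the dead path ("4420 not found") and keeps only 4421. Run by hand, the same check would look roughly like this (same RPCs as in the trace; the expected output matches the [[ 4421 == \4\4\2\1 ]] comparison below):

    # target side: drop the first listener, as the test does at @127
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # host side: once discovery has pruned the dead path, only the second port remains
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid'        # expected: 4421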
00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:06.919 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:19:06.920 01:06:19 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:06.920 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.221 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:19:07.221 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:07.221 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:19:07.221 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:19:07.221 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:07.221 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:07.221 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:19:07.221 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:19:07.221 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:07.221 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:19:07.221 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:07.221 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:07.221 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.221 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:07.221 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.221 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:19:07.221 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:19:07.221 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:19:07.221 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:19:07.221 01:06:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:07.221 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.221 01:06:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:08.160 [2024-05-15 01:06:20.381803] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:08.160 [2024-05-15 01:06:20.381833] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:08.160 [2024-05-15 01:06:20.381858] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:08.160 [2024-05-15 01:06:20.510342] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:19:08.418 [2024-05-15 01:06:20.616704] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:08.418 [2024-05-15 01:06:20.616750] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:19:08.418 request: 00:19:08.418 { 00:19:08.418 "name": "nvme", 00:19:08.418 "trtype": "tcp", 00:19:08.418 "traddr": "10.0.0.2", 00:19:08.418 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:08.418 "adrfam": "ipv4", 00:19:08.418 "trsvcid": "8009", 00:19:08.418 "wait_for_attach": true, 00:19:08.418 "method": "bdev_nvme_start_discovery", 00:19:08.418 "req_id": 1 00:19:08.418 } 00:19:08.418 Got JSON-RPC error response 00:19:08.418 response: 00:19:08.418 { 00:19:08.418 "code": -17, 00:19:08.418 "message": "File exists" 00:19:08.418 } 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:08.418 request: 00:19:08.418 { 00:19:08.418 "name": "nvme_second", 00:19:08.418 "trtype": "tcp", 00:19:08.418 "traddr": "10.0.0.2", 00:19:08.418 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:08.418 "adrfam": "ipv4", 00:19:08.418 "trsvcid": "8009", 00:19:08.418 "wait_for_attach": true, 00:19:08.418 "method": "bdev_nvme_start_discovery", 00:19:08.418 "req_id": 1 00:19:08.418 } 00:19:08.418 Got JSON-RPC error response 00:19:08.418 response: 00:19:08.418 { 00:19:08.418 "code": -17, 00:19:08.418 "message": "File exists" 00:19:08.418 } 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.418 01:06:20 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:19:08.418 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:08.419 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:08.419 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:08.419 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:08.419 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:08.419 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:08.419 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.419 01:06:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:09.433 [2024-05-15 01:06:21.817167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:09.433 [2024-05-15 01:06:21.817398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:09.433 [2024-05-15 01:06:21.817427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x157c8b0 with addr=10.0.0.2, port=8010 00:19:09.433 [2024-05-15 01:06:21.817455] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:09.433 [2024-05-15 01:06:21.817469] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:09.433 [2024-05-15 01:06:21.817482] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:19:10.806 [2024-05-15 01:06:22.819592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:10.806 [2024-05-15 01:06:22.819817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:10.806 [2024-05-15 01:06:22.819849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x159adc0 with addr=10.0.0.2, port=8010 00:19:10.806 [2024-05-15 01:06:22.819882] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:10.806 [2024-05-15 01:06:22.819899] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:10.806 [2024-05-15 01:06:22.819915] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:19:11.739 [2024-05-15 01:06:23.821728] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:19:11.739 request: 00:19:11.739 { 00:19:11.739 "name": "nvme_second", 00:19:11.739 "trtype": "tcp", 00:19:11.739 "traddr": "10.0.0.2", 00:19:11.739 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:11.739 "adrfam": "ipv4", 00:19:11.739 "trsvcid": "8010", 00:19:11.739 "attach_timeout_ms": 3000, 00:19:11.739 
"method": "bdev_nvme_start_discovery", 00:19:11.739 "req_id": 1 00:19:11.739 } 00:19:11.739 Got JSON-RPC error response 00:19:11.739 response: 00:19:11.739 { 00:19:11.739 "code": -110, 00:19:11.739 "message": "Connection timed out" 00:19:11.739 } 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1307904 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:11.739 rmmod nvme_tcp 00:19:11.739 rmmod nvme_fabrics 00:19:11.739 rmmod nvme_keyring 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1307767 ']' 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1307767 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 1307767 ']' 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 1307767 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1307767 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1307767' 00:19:11.739 killing process with pid 1307767 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 1307767 00:19:11.739 [2024-05-15 01:06:23.958482] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:11.739 01:06:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 1307767 00:19:11.999 01:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:11.999 01:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:11.999 01:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:11.999 01:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:11.999 01:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:11.999 01:06:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.999 01:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:11.999 01:06:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.900 01:06:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:13.900 00:19:13.900 real 0m13.637s 00:19:13.900 user 0m19.252s 00:19:13.900 sys 0m3.073s 00:19:13.900 01:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:13.900 01:06:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:13.900 ************************************ 00:19:13.900 END TEST nvmf_host_discovery 00:19:13.900 ************************************ 00:19:14.158 01:06:26 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:19:14.158 01:06:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:14.158 01:06:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:14.158 01:06:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:14.158 ************************************ 00:19:14.158 START TEST nvmf_host_multipath_status 00:19:14.158 ************************************ 00:19:14.158 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:19:14.158 * Looking for test storage... 
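End-of-test teardown, as traced above: the EXIT trap is cleared, the host application holding /tmp/host.sock is killed (pid 1307904 here), and nvmftestfini unloads the kernel initiator modules before killprocess stops the nvmf target (pid 1307767) after confirming it is still alive and not a sudo process. A condensed sketch with the pids left symbolic ($hostpid/$nvmfpid are placeholders, not the script's real variable names):

    trap - SIGINT SIGTERM EXIT        # discovery.sh@159: disarm the error-path cleanup
    kill "$hostpid"                   # stop the host app that owned /tmp/host.sock
    # nvmftestfini / nvmfcleanup (nvmf/common.sh@117-@123 in the trace):
    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # killprocess: signal the target only if it is still running, then reap it
    kill -0 "$nvmfpid" && kill "$nvmfpid" && wait "$nvmfpid"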
00:19:14.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:14.158 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:14.158 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:19:14.158 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:14.158 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:14.158 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:14.158 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:14.158 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:14.158 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:14.158 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:14.158 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:14.158 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:14.158 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:14.158 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:14.158 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:14.158 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:14.158 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:14.158 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:14.158 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:14.158 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:14.158 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:14.158 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:14.158 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:14.158 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.158 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.158 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.158 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:19:14.158 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.158 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:19:14.159 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:14.159 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:14.159 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:14.159 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:14.159 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:14.159 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:14.159 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:14.159 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:14.159 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:14.159 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:14.159 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:14.159 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:19:14.159 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:14.159 01:06:26 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:14.159 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:19:14.159 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:14.159 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:14.159 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:14.159 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:14.159 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:14.159 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:14.159 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:14.159 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.159 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:14.159 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:14.159 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:19:14.159 01:06:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:16.687 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:16.687 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:16.687 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:16.687 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:16.687 01:06:28 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:16.687 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:16.688 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:16.688 01:06:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:16.688 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:16.688 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:16.688 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:16.688 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:16.688 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:16.688 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:16.946 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:16.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:16.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:19:16.946 00:19:16.946 --- 10.0.0.2 ping statistics --- 00:19:16.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.946 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:19:16.946 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:16.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:16.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:19:16.946 00:19:16.946 --- 10.0.0.1 ping statistics --- 00:19:16.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.946 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:19:16.946 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:16.946 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:19:16.946 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:16.946 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:16.946 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:16.946 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:16.946 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:16.946 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:16.946 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:16.946 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:19:16.946 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:16.946 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:16.946 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:16.946 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1311236 00:19:16.946 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:16.946 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1311236 00:19:16.946 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 1311236 ']' 00:19:16.946 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.946 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:16.946 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.946 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:16.946 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:16.946 [2024-05-15 01:06:29.149901] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:19:16.946 [2024-05-15 01:06:29.149997] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.946 EAL: No free 2048 kB hugepages reported on node 1 00:19:16.946 [2024-05-15 01:06:29.230622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:17.204 [2024-05-15 01:06:29.350872] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:17.204 [2024-05-15 01:06:29.350919] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:17.204 [2024-05-15 01:06:29.350949] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:17.204 [2024-05-15 01:06:29.350961] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:17.204 [2024-05-15 01:06:29.350971] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:17.204 [2024-05-15 01:06:29.354955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:17.204 [2024-05-15 01:06:29.354968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.204 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:17.204 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:19:17.204 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:17.204 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:17.204 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:17.204 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:17.204 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1311236 00:19:17.204 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:17.461 [2024-05-15 01:06:29.713281] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:17.461 01:06:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:17.718 Malloc0 00:19:17.718 01:06:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:19:17.975 01:06:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:18.233 01:06:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:18.490 [2024-05-15 01:06:30.757796] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:19:18.490 [2024-05-15 01:06:30.758111] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:18.490 01:06:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:18.748 [2024-05-15 01:06:30.994685] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:18.748 01:06:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1311516 00:19:18.748 01:06:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:18.748 01:06:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1311516 /var/tmp/bdevperf.sock 00:19:18.748 01:06:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:18.748 01:06:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 1311516 ']' 00:19:18.748 01:06:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:18.748 01:06:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:18.748 01:06:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:18.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:18.748 01:06:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:18.748 01:06:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:19.680 01:06:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:19.680 01:06:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:19:19.680 01:06:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:19.938 01:06:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:19:20.503 Nvme0n1 00:19:20.503 01:06:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:21.068 Nvme0n1 00:19:21.068 01:06:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:19:21.068 01:06:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:22.989 01:06:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:19:22.989 01:06:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:19:23.259 01:06:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:23.516 01:06:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:19:24.451 01:06:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:19:24.451 01:06:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:24.451 01:06:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:24.451 01:06:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:24.709 01:06:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:24.709 01:06:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:24.709 01:06:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:24.709 01:06:37 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:24.967 01:06:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:24.967 01:06:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:24.967 01:06:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:24.967 01:06:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:25.226 01:06:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:25.226 01:06:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:25.226 01:06:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:25.226 01:06:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:25.484 01:06:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:25.484 01:06:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:25.484 01:06:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:25.484 01:06:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:25.743 01:06:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:25.743 01:06:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:25.743 01:06:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:25.743 01:06:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:26.000 01:06:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:26.000 01:06:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:19:26.000 01:06:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:26.257 01:06:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:26.515 01:06:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:19:27.887 01:06:39 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:19:27.887 01:06:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:27.887 01:06:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:27.887 01:06:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:27.887 01:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:27.887 01:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:27.887 01:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:27.887 01:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:28.145 01:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:28.145 01:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:28.145 01:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:28.145 01:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:28.403 01:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:28.403 01:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:28.403 01:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:28.403 01:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:28.660 01:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:28.660 01:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:28.661 01:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:28.661 01:06:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:28.918 01:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:28.918 01:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:28.918 01:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:28.918 01:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:29.176 01:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:29.176 01:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:19:29.176 01:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:29.433 01:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:19:29.691 01:06:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:19:30.624 01:06:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:19:30.624 01:06:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:30.624 01:06:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:30.624 01:06:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:30.882 01:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:30.882 01:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:30.882 01:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:30.882 01:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:31.139 01:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:31.139 01:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:31.139 01:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:31.139 01:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:31.396 01:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:31.396 01:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:31.396 01:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:31.396 01:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:31.653 01:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:31.653 01:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:31.653 01:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:31.653 01:06:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:31.910 01:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:31.910 01:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:31.910 01:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:31.910 01:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:32.168 01:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:32.168 01:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:19:32.168 01:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:32.427 01:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:32.685 01:06:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:19:33.619 01:06:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:19:33.619 01:06:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:33.619 01:06:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:33.619 01:06:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:33.878 01:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:33.878 01:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:33.878 01:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:33.878 01:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:34.135 01:06:46 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:34.135 01:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:34.135 01:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:34.135 01:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:34.393 01:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:34.393 01:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:34.393 01:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:34.393 01:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:34.651 01:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:34.651 01:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:34.651 01:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:34.651 01:06:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:34.909 01:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:34.909 01:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:34.909 01:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:34.909 01:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:35.167 01:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:35.167 01:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:19:35.167 01:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:35.424 01:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:35.681 01:06:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:19:36.613 01:06:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:19:36.613 01:06:48 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:36.613 01:06:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:36.613 01:06:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:36.870 01:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:36.870 01:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:36.870 01:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:36.870 01:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:37.127 01:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:37.127 01:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:37.127 01:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:37.127 01:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:37.384 01:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:37.384 01:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:37.385 01:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:37.385 01:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:37.642 01:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:37.642 01:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:37.642 01:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:37.642 01:06:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:37.899 01:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:37.899 01:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:37.899 01:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:37.899 01:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:38.156 01:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:38.156 01:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:19:38.156 01:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:38.414 01:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:38.671 01:06:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:19:39.603 01:06:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:19:39.603 01:06:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:39.603 01:06:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:39.603 01:06:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:39.861 01:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:39.861 01:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:39.861 01:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:39.861 01:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:40.119 01:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:40.119 01:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:40.119 01:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:40.119 01:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:40.377 01:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:40.377 01:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:40.377 01:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:40.377 01:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:40.634 01:06:52 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:40.634 01:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:40.634 01:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:40.634 01:06:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:40.892 01:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:40.892 01:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:40.892 01:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:40.892 01:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:41.150 01:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:41.150 01:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:19:41.408 01:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:19:41.408 01:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:19:41.666 01:06:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:41.924 01:06:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:19:42.857 01:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:19:42.857 01:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:42.857 01:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:42.857 01:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:43.114 01:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:43.114 01:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:43.114 01:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:43.114 01:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").current' 00:19:43.372 01:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:43.372 01:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:43.372 01:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:43.372 01:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:43.630 01:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:43.630 01:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:43.630 01:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:43.630 01:06:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:43.888 01:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:43.888 01:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:43.888 01:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:43.888 01:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:44.146 01:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:44.146 01:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:44.146 01:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:44.146 01:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:44.404 01:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:44.404 01:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:19:44.404 01:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:44.661 01:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:44.918 01:06:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:19:45.851 01:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true 
true true true true 00:19:45.851 01:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:45.852 01:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:45.852 01:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:46.109 01:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:46.109 01:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:46.109 01:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:46.109 01:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:46.366 01:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:46.366 01:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:46.366 01:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:46.366 01:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:46.625 01:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:46.625 01:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:46.625 01:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:46.625 01:06:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:46.884 01:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:46.884 01:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:46.884 01:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:46.884 01:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:47.142 01:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:47.142 01:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:47.142 01:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:47.142 01:06:59 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:47.400 01:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:47.400 01:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:19:47.400 01:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:47.658 01:06:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:19:47.916 01:07:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:19:48.849 01:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:19:48.849 01:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:48.849 01:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:48.849 01:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:49.106 01:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:49.106 01:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:49.106 01:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:49.106 01:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:49.365 01:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:49.365 01:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:49.365 01:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:49.365 01:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:49.649 01:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:49.649 01:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:49.649 01:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:49.649 01:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:49.907 01:07:02 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:49.907 01:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:49.907 01:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:49.907 01:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:50.164 01:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:50.164 01:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:50.164 01:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:50.164 01:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:50.421 01:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:50.421 01:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:19:50.421 01:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:50.678 01:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:50.935 01:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:19:51.866 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:19:51.866 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:51.866 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:51.866 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:52.123 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:52.123 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:52.123 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:52.123 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:52.381 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:52.381 01:07:04 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:52.381 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:52.381 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:52.638 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:52.638 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:52.638 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:52.638 01:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:52.895 01:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:52.896 01:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:52.896 01:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:52.896 01:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:53.153 01:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:53.153 01:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:53.153 01:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:53.153 01:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:53.410 01:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:53.410 01:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1311516 00:19:53.410 01:07:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 1311516 ']' 00:19:53.410 01:07:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 1311516 00:19:53.410 01:07:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:19:53.410 01:07:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:53.410 01:07:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1311516 00:19:53.410 01:07:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:53.410 01:07:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:53.410 01:07:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 
1311516' 00:19:53.410 killing process with pid 1311516 00:19:53.410 01:07:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 1311516 00:19:53.410 01:07:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 1311516 00:19:53.687 Connection closed with partial response: 00:19:53.687 00:19:53.687 00:19:53.687 01:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1311516 00:19:53.687 01:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:53.687 [2024-05-15 01:06:31.056514] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:19:53.687 [2024-05-15 01:06:31.056611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1311516 ] 00:19:53.687 EAL: No free 2048 kB hugepages reported on node 1 00:19:53.687 [2024-05-15 01:06:31.126859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.687 [2024-05-15 01:06:31.238162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.687 Running I/O for 90 seconds... 00:19:53.687 [2024-05-15 01:06:47.627805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:117792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.687 [2024-05-15 01:06:47.627872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:53.687 [2024-05-15 01:06:47.627915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:117800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.687 [2024-05-15 01:06:47.627953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:53.687 [2024-05-15 01:06:47.627979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:117808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.687 [2024-05-15 01:06:47.627996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:53.687 [2024-05-15 01:06:47.628018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:117816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.687 [2024-05-15 01:06:47.628034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:53.687 [2024-05-15 01:06:47.628055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:117824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.687 [2024-05-15 01:06:47.628071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:53.687 [2024-05-15 01:06:47.628093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:117832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.687 [2024-05-15 01:06:47.628108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:53.687 [2024-05-15 01:06:47.628130] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:117840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.687 [2024-05-15 01:06:47.628146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:53.687 [2024-05-15 01:06:47.628168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:117848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.687 [2024-05-15 01:06:47.628184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:53.687 [2024-05-15 01:06:47.628430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:117856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.687 [2024-05-15 01:06:47.628454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:53.687 [2024-05-15 01:06:47.628483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:117360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.687 [2024-05-15 01:06:47.628501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:53.687 [2024-05-15 01:06:47.628524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:117368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.687 [2024-05-15 01:06:47.628566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:53.687 [2024-05-15 01:06:47.628589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:117376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.687 [2024-05-15 01:06:47.628606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:53.687 [2024-05-15 01:06:47.628643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.687 [2024-05-15 01:06:47.628658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:53.687 [2024-05-15 01:06:47.628679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:117392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.687 [2024-05-15 01:06:47.628694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:53.687 [2024-05-15 01:06:47.628714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:117400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.687 [2024-05-15 01:06:47.628729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:53.687 [2024-05-15 01:06:47.628749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:117408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.687 [2024-05-15 01:06:47.628765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 
cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:53.687 [2024-05-15 01:06:47.628786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:117864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.687 [2024-05-15 01:06:47.628802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:53.687 [2024-05-15 01:06:47.628822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:117416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.687 [2024-05-15 01:06:47.628837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:53.687 [2024-05-15 01:06:47.628858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:117424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.687 [2024-05-15 01:06:47.628890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:53.687 [2024-05-15 01:06:47.628912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:117432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.687 [2024-05-15 01:06:47.628950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:53.687 [2024-05-15 01:06:47.628989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:117440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.687 [2024-05-15 01:06:47.629007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:53.687 [2024-05-15 01:06:47.629029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:117448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.687 [2024-05-15 01:06:47.629046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:53.687 [2024-05-15 01:06:47.629068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:117456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.687 [2024-05-15 01:06:47.629089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:53.687 [2024-05-15 01:06:47.629113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.687 [2024-05-15 01:06:47.629129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:53.687 [2024-05-15 01:06:47.629152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:117472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.687 [2024-05-15 01:06:47.629168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:53.688 [2024-05-15 01:06:47.629192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.688 [2024-05-15 01:06:47.629208] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:53.688 [2024-05-15 01:06:47.629230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:117488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.688 [2024-05-15 01:06:47.629246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:53.688 [2024-05-15 01:06:47.629284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:117496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.688 [2024-05-15 01:06:47.629299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:53.688 [2024-05-15 01:06:47.629319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:117504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.688 [2024-05-15 01:06:47.629334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:53.688 [2024-05-15 01:06:47.629355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:117512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.688 [2024-05-15 01:06:47.629370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:53.688 [2024-05-15 01:06:47.629391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:117520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.688 [2024-05-15 01:06:47.629406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:53.688 [2024-05-15 01:06:47.629442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:117528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.688 [2024-05-15 01:06:47.629456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:53.688 [2024-05-15 01:06:47.629477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:117536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.688 [2024-05-15 01:06:47.629491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:53.688 [2024-05-15 01:06:47.629511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:117544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.688 [2024-05-15 01:06:47.629526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:53.688 [2024-05-15 01:06:47.629545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.688 [2024-05-15 01:06:47.629560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:53.688 [2024-05-15 01:06:47.629584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:117560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.688 
[2024-05-15 01:06:47.629599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:53.688 [2024-05-15 01:06:47.629619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:117568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.688 [2024-05-15 01:06:47.629633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:53.688 [2024-05-15 01:06:47.629653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.688 [2024-05-15 01:06:47.629668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:53.688 [2024-05-15 01:06:47.629688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:117584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.688 [2024-05-15 01:06:47.629703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:53.688 [2024-05-15 01:06:47.629723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:117592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.688 [2024-05-15 01:06:47.629737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:53.688 [2024-05-15 01:06:47.629757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:117872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.688 [2024-05-15 01:06:47.629772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:53.688 [2024-05-15 01:06:47.629792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.688 [2024-05-15 01:06:47.629807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:53.688 [2024-05-15 01:06:47.630156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:117888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.688 [2024-05-15 01:06:47.630179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:53.688 [2024-05-15 01:06:47.630205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:117896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.688 [2024-05-15 01:06:47.630223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:53.688 [2024-05-15 01:06:47.630245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:117904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.688 [2024-05-15 01:06:47.630280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:53.688 [2024-05-15 01:06:47.630303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 
lba:117912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.688 [2024-05-15 01:06:47.630319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:53.688 [2024-05-15 01:06:47.631329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:117920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.688 [2024-05-15 01:06:47.631351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:53.688 [2024-05-15 01:06:47.631382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:117928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.688 [2024-05-15 01:06:47.631399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:53.688 [2024-05-15 01:06:47.631420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:117936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.688 [2024-05-15 01:06:47.631436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:53.688 [2024-05-15 01:06:47.631457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:117944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.688 [2024-05-15 01:06:47.631472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:53.688 [2024-05-15 01:06:47.631492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:117952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.688 [2024-05-15 01:06:47.631507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:53.688 [2024-05-15 01:06:47.631543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:117960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.688 [2024-05-15 01:06:47.631558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:53.688 [2024-05-15 01:06:47.631578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:117968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.688 [2024-05-15 01:06:47.631608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:53.688 [2024-05-15 01:06:47.631630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:117976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.688 [2024-05-15 01:06:47.631645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:53.688 [2024-05-15 01:06:47.631666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:117984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.688 [2024-05-15 01:06:47.631682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.688 [2024-05-15 01:06:47.631703] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:117992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.688 [2024-05-15 01:06:47.631718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:53.688 [2024-05-15 01:06:47.631739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:118000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.688 [2024-05-15 01:06:47.631755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:53.688 [2024-05-15 01:06:47.631777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:118008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.688 [2024-05-15 01:06:47.631792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:53.688 [2024-05-15 01:06:47.631814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:118016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.689 [2024-05-15 01:06:47.631829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:53.689 [2024-05-15 01:06:47.631850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:118024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.689 [2024-05-15 01:06:47.631870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:53.689 [2024-05-15 01:06:47.631907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:118032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.689 [2024-05-15 01:06:47.631923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:53.689 [2024-05-15 01:06:47.631952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:118040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.689 [2024-05-15 01:06:47.631968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:53.689 [2024-05-15 01:06:47.631989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:118048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.689 [2024-05-15 01:06:47.632004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:53.689 [2024-05-15 01:06:47.632225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:118056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.689 [2024-05-15 01:06:47.632251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:53.689 [2024-05-15 01:06:47.632278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:118064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.689 [2024-05-15 01:06:47.632295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000a p:0 m:0 
dnr:0 00:19:53.689 [2024-05-15 01:06:47.632318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.689 [2024-05-15 01:06:47.632333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:53.689 [2024-05-15 01:06:47.632355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:118080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.689 [2024-05-15 01:06:47.632374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:53.689 [2024-05-15 01:06:47.632397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:118088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.689 [2024-05-15 01:06:47.632413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:53.689 [2024-05-15 01:06:47.632434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:118096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.689 [2024-05-15 01:06:47.632449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:53.689 [2024-05-15 01:06:47.632471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:118104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.689 [2024-05-15 01:06:47.632486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:53.689 [2024-05-15 01:06:47.632507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:118112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.689 [2024-05-15 01:06:47.632538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:53.689 [2024-05-15 01:06:47.632560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:118120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.689 [2024-05-15 01:06:47.632579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:53.689 [2024-05-15 01:06:47.632600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:118128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.689 [2024-05-15 01:06:47.632616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:53.689 [2024-05-15 01:06:47.632636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:118136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.689 [2024-05-15 01:06:47.632652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:53.689 [2024-05-15 01:06:47.632672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:118144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.689 [2024-05-15 01:06:47.632687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:53.689 [2024-05-15 01:06:47.632708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:118152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.689 [2024-05-15 01:06:47.632722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:53.689 [2024-05-15 01:06:47.632742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:118160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.689 [2024-05-15 01:06:47.632757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:53.689 [2024-05-15 01:06:47.632777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:118168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.689 [2024-05-15 01:06:47.632793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:53.689 [2024-05-15 01:06:47.632813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:118176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.689 [2024-05-15 01:06:47.632828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:53.689 [2024-05-15 01:06:47.632848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:118184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.689 [2024-05-15 01:06:47.632863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:53.689 [2024-05-15 01:06:47.632883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:118192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.689 [2024-05-15 01:06:47.632898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:53.689 [2024-05-15 01:06:47.632941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:118200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.689 [2024-05-15 01:06:47.632959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:53.689 [2024-05-15 01:06:47.632981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:118208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.689 [2024-05-15 01:06:47.632997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:53.689 [2024-05-15 01:06:47.633019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:118216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.689 [2024-05-15 01:06:47.633035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:53.689 [2024-05-15 01:06:47.633061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:118224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.689 [2024-05-15 01:06:47.633078] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:53.689 [2024-05-15 01:06:47.633100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:118232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.689 [2024-05-15 01:06:47.633116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:53.689 [2024-05-15 01:06:47.633137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:118240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.689 [2024-05-15 01:06:47.633152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:53.689 [2024-05-15 01:06:47.633173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:118248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.689 [2024-05-15 01:06:47.633189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:53.689 [2024-05-15 01:06:47.633210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:118256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.689 [2024-05-15 01:06:47.633240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:53.689 [2024-05-15 01:06:47.633263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:118264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.689 [2024-05-15 01:06:47.633278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:53.689 [2024-05-15 01:06:47.633299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:118272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.689 [2024-05-15 01:06:47.633314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:53.689 [2024-05-15 01:06:47.633335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:118280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.689 [2024-05-15 01:06:47.633350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:53.689 [2024-05-15 01:06:47.633371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:118288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.690 [2024-05-15 01:06:47.633387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:53.690 [2024-05-15 01:06:47.633408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:118296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.690 [2024-05-15 01:06:47.633423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:53.690 [2024-05-15 01:06:47.633443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:118304 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:53.690 [2024-05-15 01:06:47.633459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:53.690 [2024-05-15 01:06:47.633479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:118312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.690 [2024-05-15 01:06:47.633495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:53.690 [2024-05-15 01:06:47.633519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:118320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.690 [2024-05-15 01:06:47.633535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:53.690 [2024-05-15 01:06:47.633557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:118328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.690 [2024-05-15 01:06:47.633572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:53.690 [2024-05-15 01:06:47.633593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:118336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.690 [2024-05-15 01:06:47.633609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:53.690 [2024-05-15 01:06:47.633629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:118344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.690 [2024-05-15 01:06:47.633644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:53.690 [2024-05-15 01:06:47.633665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:118352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.690 [2024-05-15 01:06:47.633680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:53.690 [2024-05-15 01:06:47.633701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:118360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.690 [2024-05-15 01:06:47.633717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:53.690 [2024-05-15 01:06:47.633737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:118368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.690 [2024-05-15 01:06:47.633753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:53.690 [2024-05-15 01:06:47.633773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.690 [2024-05-15 01:06:47.633790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:53.690 [2024-05-15 01:06:47.633811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:79 nsid:1 lba:117600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.690 [2024-05-15 01:06:47.633826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:53.690 [2024-05-15 01:06:47.633847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:117608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.690 [2024-05-15 01:06:47.633862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:53.690 [2024-05-15 01:06:47.633882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:117616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.690 [2024-05-15 01:06:47.633897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:53.690 [2024-05-15 01:06:47.633939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:117624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.690 [2024-05-15 01:06:47.633957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:53.690 [2024-05-15 01:06:47.633986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:117632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.690 [2024-05-15 01:06:47.634003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:53.690 [2024-05-15 01:06:47.634024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:117640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.690 [2024-05-15 01:06:47.634040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:53.690 [2024-05-15 01:06:47.634601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:117648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.690 [2024-05-15 01:06:47.634625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:53.690 [2024-05-15 01:06:47.634652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:117656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.690 [2024-05-15 01:06:47.634669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:53.690 [2024-05-15 01:06:47.634693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:117664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.690 [2024-05-15 01:06:47.634709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:53.690 [2024-05-15 01:06:47.634732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:117672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.690 [2024-05-15 01:06:47.634748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:53.690 [2024-05-15 01:06:47.634770] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.690 [2024-05-15 01:06:47.634785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:53.690 [2024-05-15 01:06:47.634807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:117688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.690 [2024-05-15 01:06:47.634823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:53.690 [2024-05-15 01:06:47.634860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:117696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.690 [2024-05-15 01:06:47.634876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:53.690 [2024-05-15 01:06:47.634897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:117704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.690 [2024-05-15 01:06:47.634938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:53.690 [2024-05-15 01:06:47.634965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:117712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.690 [2024-05-15 01:06:47.634982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:53.690 [2024-05-15 01:06:47.635004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:117720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.690 [2024-05-15 01:06:47.635020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:53.690 [2024-05-15 01:06:47.635042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:117728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.690 [2024-05-15 01:06:47.635063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:53.690 [2024-05-15 01:06:47.635087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:117736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.690 [2024-05-15 01:06:47.635104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:53.690 [2024-05-15 01:06:47.635126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:117744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.690 [2024-05-15 01:06:47.635142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:53.690 [2024-05-15 01:06:47.635164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.690 [2024-05-15 01:06:47.635180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 
cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:53.690 [2024-05-15 01:06:47.635218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:117760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.690 [2024-05-15 01:06:47.635235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:53.690 [2024-05-15 01:06:47.635257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:117768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.690 [2024-05-15 01:06:47.635287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:53.691 [2024-05-15 01:06:47.635309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:117776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.691 [2024-05-15 01:06:47.635324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:53.691 [2024-05-15 01:06:47.635345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:117784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.691 [2024-05-15 01:06:47.635360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:53.691 [2024-05-15 01:06:47.635381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:117792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.691 [2024-05-15 01:06:47.635396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:53.691 [2024-05-15 01:06:47.635416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:117800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.691 [2024-05-15 01:06:47.635431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:53.691 [2024-05-15 01:06:47.635452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:117808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.691 [2024-05-15 01:06:47.635467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:53.691 [2024-05-15 01:06:47.635488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:117816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.691 [2024-05-15 01:06:47.635503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:53.691 [2024-05-15 01:06:47.635525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:117824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.691 [2024-05-15 01:06:47.635544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:53.691 [2024-05-15 01:06:47.635565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:117832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.691 [2024-05-15 01:06:47.635581] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:53.691 [2024-05-15 01:06:47.635601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:117840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.691 [2024-05-15 01:06:47.635617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:53.691 [2024-05-15 01:06:47.635637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.691 [2024-05-15 01:06:47.635652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:53.691 [2024-05-15 01:06:47.635673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:117856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.691 [2024-05-15 01:06:47.635688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:53.691 [2024-05-15 01:06:47.635709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:117360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.691 [2024-05-15 01:06:47.635725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:53.691 [2024-05-15 01:06:47.635746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:117368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.691 [2024-05-15 01:06:47.635761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:53.691 [2024-05-15 01:06:47.635782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.691 [2024-05-15 01:06:47.635797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:53.691 [2024-05-15 01:06:47.635818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.691 [2024-05-15 01:06:47.635834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:53.691 [2024-05-15 01:06:47.635855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:117392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.691 [2024-05-15 01:06:47.635870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:53.691 [2024-05-15 01:06:47.635891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.691 [2024-05-15 01:06:47.635906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:53.691 [2024-05-15 01:06:47.635952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:117408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.691 [2024-05-15 
01:06:47.635969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:53.691 [2024-05-15 01:06:47.635991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:117864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.691 [2024-05-15 01:06:47.636006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:53.691 [2024-05-15 01:06:47.636032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:117416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.691 [2024-05-15 01:06:47.636049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:53.691 [2024-05-15 01:06:47.636071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:117424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.691 [2024-05-15 01:06:47.636087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:53.691 [2024-05-15 01:06:47.636108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:117432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.691 [2024-05-15 01:06:47.636123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:53.691 [2024-05-15 01:06:47.636144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:117440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.691 [2024-05-15 01:06:47.636160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:53.691 [2024-05-15 01:06:47.636181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:117448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.691 [2024-05-15 01:06:47.636196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:53.691 [2024-05-15 01:06:47.636232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:117456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.691 [2024-05-15 01:06:47.636249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:53.691 [2024-05-15 01:06:47.636270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.691 [2024-05-15 01:06:47.636285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:53.691 [2024-05-15 01:06:47.636306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:117472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.691 [2024-05-15 01:06:47.636320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:53.691 [2024-05-15 01:06:47.636348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:117480 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.691 [2024-05-15 01:06:47.636364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:53.691 [2024-05-15 01:06:47.636385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:117488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.691 [2024-05-15 01:06:47.636400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:53.691 [2024-05-15 01:06:47.636421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:117496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.691 [2024-05-15 01:06:47.636436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:53.691 [2024-05-15 01:06:47.636457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:117504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.691 [2024-05-15 01:06:47.636472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:53.691 [2024-05-15 01:06:47.636497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:117512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.691 [2024-05-15 01:06:47.636513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:53.691 [2024-05-15 01:06:47.636533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:117520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.691 [2024-05-15 01:06:47.636548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:53.691 [2024-05-15 01:06:47.636569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:117528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.691 [2024-05-15 01:06:47.636584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:53.692 [2024-05-15 01:06:47.636605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:117536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.692 [2024-05-15 01:06:47.636620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:53.692 [2024-05-15 01:06:47.636640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:117544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.692 [2024-05-15 01:06:47.636656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:53.692 [2024-05-15 01:06:47.636676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:117552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.692 [2024-05-15 01:06:47.636691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:53.692 [2024-05-15 01:06:47.636711] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:117560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.692 [2024-05-15 01:06:47.636727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:53.692 [2024-05-15 01:06:47.636747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:117568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.692 [2024-05-15 01:06:47.636762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:53.692 [2024-05-15 01:06:47.636782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:117576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.692 [2024-05-15 01:06:47.636797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:53.692 [2024-05-15 01:06:47.636818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:117584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.692 [2024-05-15 01:06:47.636833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:53.692 [2024-05-15 01:06:47.636853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:117592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.692 [2024-05-15 01:06:47.636867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:53.692 [2024-05-15 01:06:47.636888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:117872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.692 [2024-05-15 01:06:47.636903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:53.692 [2024-05-15 01:06:47.636953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:117880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.692 [2024-05-15 01:06:47.636972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:53.692 [2024-05-15 01:06:47.636995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:117888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.692 [2024-05-15 01:06:47.637011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:53.692 [2024-05-15 01:06:47.637032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:117896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.692 [2024-05-15 01:06:47.637063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:53.692 [2024-05-15 01:06:47.637682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:117904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.692 [2024-05-15 01:06:47.637709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0076 
p:0 m:0 dnr:0 00:19:53.692 [2024-05-15 01:06:47.637737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:117912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.692 [2024-05-15 01:06:47.637754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:53.692 [2024-05-15 01:06:47.637776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:117920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.692 [2024-05-15 01:06:47.637792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:53.692 [2024-05-15 01:06:47.637813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:117928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.692 [2024-05-15 01:06:47.637829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:53.692 [2024-05-15 01:06:47.637850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:117936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.692 [2024-05-15 01:06:47.637865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:53.692 [2024-05-15 01:06:47.637886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:117944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.692 [2024-05-15 01:06:47.637902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:53.692 [2024-05-15 01:06:47.637947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:117952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.692 [2024-05-15 01:06:47.637965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:53.692 [2024-05-15 01:06:47.638003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:117960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.692 [2024-05-15 01:06:47.638019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:53.692 [2024-05-15 01:06:47.638040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:117968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.692 [2024-05-15 01:06:47.638056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:53.692 [2024-05-15 01:06:47.638077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:117976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.692 [2024-05-15 01:06:47.638098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:53.692 [2024-05-15 01:06:47.638121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:117984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.692 [2024-05-15 01:06:47.638136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.692 [2024-05-15 01:06:47.638158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:117992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.692 [2024-05-15 01:06:47.638174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:53.692 [2024-05-15 01:06:47.638195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:118000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.693 [2024-05-15 01:06:47.638211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:53.693 [2024-05-15 01:06:47.638254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:118008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.693 [2024-05-15 01:06:47.638270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:53.693 [2024-05-15 01:06:47.638291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:118016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.693 [2024-05-15 01:06:47.638306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:53.693 [2024-05-15 01:06:47.638327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:118024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.693 [2024-05-15 01:06:47.638342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:53.693 [2024-05-15 01:06:47.638363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:118032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.693 [2024-05-15 01:06:47.638377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:53.693 [2024-05-15 01:06:47.638398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:118040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.693 [2024-05-15 01:06:47.638413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:53.693 [2024-05-15 01:06:47.638433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:118048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.693 [2024-05-15 01:06:47.638448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:53.693 [2024-05-15 01:06:47.638468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:118056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.693 [2024-05-15 01:06:47.638483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:53.693 [2024-05-15 01:06:47.638504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:118064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.693 [2024-05-15 01:06:47.638519] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:53.693 [2024-05-15 01:06:47.638539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.693 [2024-05-15 01:06:47.638558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:53.693 [2024-05-15 01:06:47.638579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:118080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.693 [2024-05-15 01:06:47.638595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:53.693 [2024-05-15 01:06:47.638616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:118088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.693 [2024-05-15 01:06:47.638631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:53.693 [2024-05-15 01:06:47.638652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.693 [2024-05-15 01:06:47.638667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:53.693 [2024-05-15 01:06:47.638687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:118104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.693 [2024-05-15 01:06:47.638703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:53.693 [2024-05-15 01:06:47.638723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:118112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.693 [2024-05-15 01:06:47.638738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:53.693 [2024-05-15 01:06:47.638758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:118120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.693 [2024-05-15 01:06:47.638773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:53.693 [2024-05-15 01:06:47.638794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:118128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.693 [2024-05-15 01:06:47.638809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:53.693 [2024-05-15 01:06:47.638835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:118136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.693 [2024-05-15 01:06:47.638850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:53.693 [2024-05-15 01:06:47.638871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:118144 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:19:53.693 [2024-05-15 01:06:47.638886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:53.693 [2024-05-15 01:06:47.638922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:118152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.693 [2024-05-15 01:06:47.638947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:53.693 [2024-05-15 01:06:47.638971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:118160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.693 [2024-05-15 01:06:47.638987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:53.693 [2024-05-15 01:06:47.639008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:118168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.693 [2024-05-15 01:06:47.639024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:53.693 [2024-05-15 01:06:47.639050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:118176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.693 [2024-05-15 01:06:47.639066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:53.693 [2024-05-15 01:06:47.639087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:118184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.693 [2024-05-15 01:06:47.639103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:53.693 [2024-05-15 01:06:47.639124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:118192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.693 [2024-05-15 01:06:47.639140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:53.693 [2024-05-15 01:06:47.639162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:118200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.693 [2024-05-15 01:06:47.639177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:53.693 [2024-05-15 01:06:47.639199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:118208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.693 [2024-05-15 01:06:47.639214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:53.693 [2024-05-15 01:06:47.639250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:118216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.693 [2024-05-15 01:06:47.639266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:53.693 [2024-05-15 01:06:47.639288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:78 nsid:1 lba:118224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.693 [2024-05-15 01:06:47.639302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:53.693 [2024-05-15 01:06:47.639323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:118232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.693 [2024-05-15 01:06:47.639338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:53.693 [2024-05-15 01:06:47.639358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:118240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.693 [2024-05-15 01:06:47.639373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:53.693 [2024-05-15 01:06:47.639394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:118248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.693 [2024-05-15 01:06:47.652743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:53.693 [2024-05-15 01:06:47.652801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:118256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.693 [2024-05-15 01:06:47.652820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:53.693 [2024-05-15 01:06:47.652843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:118264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.693 [2024-05-15 01:06:47.652859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:53.693 [2024-05-15 01:06:47.652886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:118272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.694 [2024-05-15 01:06:47.652901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:53.694 [2024-05-15 01:06:47.652950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:118280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.694 [2024-05-15 01:06:47.652993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:53.694 [2024-05-15 01:06:47.653019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:118288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.694 [2024-05-15 01:06:47.653036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:53.694 [2024-05-15 01:06:47.653058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:118296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.694 [2024-05-15 01:06:47.653074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:53.694 [2024-05-15 01:06:47.653097] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:118304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.694 [2024-05-15 01:06:47.653113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:53.694 [2024-05-15 01:06:47.653136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:118312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.694 [2024-05-15 01:06:47.653152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:53.694 [2024-05-15 01:06:47.653174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:118320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.694 [2024-05-15 01:06:47.653191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:53.694 [2024-05-15 01:06:47.653212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:118328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.694 [2024-05-15 01:06:47.653244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:53.694 [2024-05-15 01:06:47.653266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:118336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.694 [2024-05-15 01:06:47.653296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:53.694 [2024-05-15 01:06:47.653318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:118344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.694 [2024-05-15 01:06:47.653332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:53.694 [2024-05-15 01:06:47.653353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:118352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.694 [2024-05-15 01:06:47.653367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:53.694 [2024-05-15 01:06:47.653387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:118360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.694 [2024-05-15 01:06:47.653401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:53.694 [2024-05-15 01:06:47.653426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:118368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.694 [2024-05-15 01:06:47.653442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:53.694 [2024-05-15 01:06:47.653462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:118376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.694 [2024-05-15 01:06:47.653477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 
sqhd:0031 p:0 m:0 dnr:0 00:19:53.694 [2024-05-15 01:06:47.653497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:117600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.694 [2024-05-15 01:06:47.653512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:53.694 [2024-05-15 01:06:47.653533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:117608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.694 [2024-05-15 01:06:47.653547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:53.694 [2024-05-15 01:06:47.653567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.694 [2024-05-15 01:06:47.653582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:53.694 [2024-05-15 01:06:47.653602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:117624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.694 [2024-05-15 01:06:47.653618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:53.694 [2024-05-15 01:06:47.653639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:117632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.694 [2024-05-15 01:06:47.653654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:53.694 [2024-05-15 01:06:47.654477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:117640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.694 [2024-05-15 01:06:47.654517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:53.694 [2024-05-15 01:06:47.654550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:117648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.694 [2024-05-15 01:06:47.654568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:53.694 [2024-05-15 01:06:47.654590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:117656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.694 [2024-05-15 01:06:47.654606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:53.694 [2024-05-15 01:06:47.654628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:117664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.694 [2024-05-15 01:06:47.654643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:53.694 [2024-05-15 01:06:47.654665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:117672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.694 [2024-05-15 01:06:47.654681] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:53.694 [2024-05-15 01:06:47.654702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:117680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.694 [2024-05-15 01:06:47.654723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:53.694 [2024-05-15 01:06:47.654745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:117688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.694 [2024-05-15 01:06:47.654761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:53.694 [2024-05-15 01:06:47.654783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:117696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.694 [2024-05-15 01:06:47.654799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:53.694 [2024-05-15 01:06:47.654835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:117704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.694 [2024-05-15 01:06:47.654850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:53.694 [2024-05-15 01:06:47.654886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:117712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.694 [2024-05-15 01:06:47.654902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:53.694 [2024-05-15 01:06:47.654923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:117720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.694 [2024-05-15 01:06:47.654963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:53.694 [2024-05-15 01:06:47.654987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:117728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.694 [2024-05-15 01:06:47.655018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:53.694 [2024-05-15 01:06:47.655042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:117736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.694 [2024-05-15 01:06:47.655059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:53.694 [2024-05-15 01:06:47.655081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:117744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.694 [2024-05-15 01:06:47.655098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:53.694 [2024-05-15 01:06:47.655120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:117752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.694 
[2024-05-15 01:06:47.655137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:53.694 [2024-05-15 01:06:47.655159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:117760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.695 [2024-05-15 01:06:47.655176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:53.695 [2024-05-15 01:06:47.655198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:117768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.695 [2024-05-15 01:06:47.655228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:53.695 [2024-05-15 01:06:47.655251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:117776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.695 [2024-05-15 01:06:47.655271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:53.695 [2024-05-15 01:06:47.655308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:117784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.695 [2024-05-15 01:06:47.655324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:53.695 [2024-05-15 01:06:47.655359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:117792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.695 [2024-05-15 01:06:47.655375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:53.695 [2024-05-15 01:06:47.655395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:117800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.695 [2024-05-15 01:06:47.655411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:53.695 [2024-05-15 01:06:47.655431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:117808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.695 [2024-05-15 01:06:47.655446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:53.695 [2024-05-15 01:06:47.655466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:117816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.695 [2024-05-15 01:06:47.655480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:53.695 [2024-05-15 01:06:47.655501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:117824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.695 [2024-05-15 01:06:47.655516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:53.695 [2024-05-15 01:06:47.655536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 
lba:117832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.695 [2024-05-15 01:06:47.655551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:53.695 [2024-05-15 01:06:47.655571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:117840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.695 [2024-05-15 01:06:47.655585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:53.695 [2024-05-15 01:06:47.655605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:117848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.695 [2024-05-15 01:06:47.655620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:53.695 [2024-05-15 01:06:47.655640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:117856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.695 [2024-05-15 01:06:47.655655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:53.695 [2024-05-15 01:06:47.655674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:117360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.695 [2024-05-15 01:06:47.655689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:53.695 [2024-05-15 01:06:47.655709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.695 [2024-05-15 01:06:47.655724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:53.695 [2024-05-15 01:06:47.655749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:117376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.695 [2024-05-15 01:06:47.655764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:53.695 [2024-05-15 01:06:47.655784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.695 [2024-05-15 01:06:47.655799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:53.695 [2024-05-15 01:06:47.655819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.695 [2024-05-15 01:06:47.655834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:53.695 [2024-05-15 01:06:47.655853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:117400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.695 [2024-05-15 01:06:47.655868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:53.695 [2024-05-15 01:06:47.655888] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:117408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.695 [2024-05-15 01:06:47.655903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:53.695 [2024-05-15 01:06:47.655946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:117864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.695 [2024-05-15 01:06:47.655964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:53.695 [2024-05-15 01:06:47.656002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.695 [2024-05-15 01:06:47.656018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:53.695 [2024-05-15 01:06:47.656041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:117424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.695 [2024-05-15 01:06:47.656058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:53.695 [2024-05-15 01:06:47.656080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:117432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.695 [2024-05-15 01:06:47.656096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:53.695 [2024-05-15 01:06:47.656118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:117440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.695 [2024-05-15 01:06:47.656134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:53.695 [2024-05-15 01:06:47.656156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:117448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.695 [2024-05-15 01:06:47.656172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:53.695 [2024-05-15 01:06:47.656193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:117456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.695 [2024-05-15 01:06:47.656225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:53.695 [2024-05-15 01:06:47.656251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.695 [2024-05-15 01:06:47.656266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:53.695 [2024-05-15 01:06:47.656303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:117472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.695 [2024-05-15 01:06:47.656318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0062 p:0 
m:0 dnr:0 00:19:53.695 [2024-05-15 01:06:47.656339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.695 [2024-05-15 01:06:47.656355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:53.695 [2024-05-15 01:06:47.656375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:117488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.695 [2024-05-15 01:06:47.656389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:53.695 [2024-05-15 01:06:47.656410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:117496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.695 [2024-05-15 01:06:47.656425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:53.695 [2024-05-15 01:06:47.656445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:117504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.695 [2024-05-15 01:06:47.656459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:53.695 [2024-05-15 01:06:47.656480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.695 [2024-05-15 01:06:47.656494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:53.695 [2024-05-15 01:06:47.656514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:117520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.695 [2024-05-15 01:06:47.656529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:53.695 [2024-05-15 01:06:47.656549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:117528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.696 [2024-05-15 01:06:47.656564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:53.696 [2024-05-15 01:06:47.656584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:117536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.696 [2024-05-15 01:06:47.656599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:53.696 [2024-05-15 01:06:47.656619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:117544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.696 [2024-05-15 01:06:47.656634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:53.696 [2024-05-15 01:06:47.656654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:117552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.696 [2024-05-15 01:06:47.656669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:53.696 [2024-05-15 01:06:47.656712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:117560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.696 [2024-05-15 01:06:47.656729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:53.696 [2024-05-15 01:06:47.656749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:117568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.696 [2024-05-15 01:06:47.656765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:53.696 [2024-05-15 01:06:47.656785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:117576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.696 [2024-05-15 01:06:47.656801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:53.696 [2024-05-15 01:06:47.656821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.696 [2024-05-15 01:06:47.656837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:53.696 [2024-05-15 01:06:47.656859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:117592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.696 [2024-05-15 01:06:47.656874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:53.696 [2024-05-15 01:06:47.656895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:117872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.696 [2024-05-15 01:06:47.656925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:53.696 [2024-05-15 01:06:47.656956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:117880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.696 [2024-05-15 01:06:47.656994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:53.696 [2024-05-15 01:06:47.657017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:117888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.696 [2024-05-15 01:06:47.657034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:53.696 [2024-05-15 01:06:47.657673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:117896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.696 [2024-05-15 01:06:47.657695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:53.696 [2024-05-15 01:06:47.657727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:117904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.696 [2024-05-15 
01:06:47.657746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:53.696 [2024-05-15 01:06:47.657768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:117912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.696 [2024-05-15 01:06:47.657785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:53.696 [2024-05-15 01:06:47.657806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:117920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.696 [2024-05-15 01:06:47.657822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:53.696 [2024-05-15 01:06:47.657843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:117928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.696 [2024-05-15 01:06:47.657864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:53.696 [2024-05-15 01:06:47.657886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.696 [2024-05-15 01:06:47.657902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:53.696 [2024-05-15 01:06:47.657949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:117944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.696 [2024-05-15 01:06:47.657969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:53.696 [2024-05-15 01:06:47.657992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:117952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.696 [2024-05-15 01:06:47.658008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:53.696 [2024-05-15 01:06:47.658030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:117960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.696 [2024-05-15 01:06:47.658047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:53.696 [2024-05-15 01:06:47.658069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:117968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.696 [2024-05-15 01:06:47.658085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:53.696 [2024-05-15 01:06:47.658107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:117976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.696 [2024-05-15 01:06:47.658123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:53.696 [2024-05-15 01:06:47.658145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:117984 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.696 [2024-05-15 01:06:47.658161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.696 [2024-05-15 01:06:47.658183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:117992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.696 [2024-05-15 01:06:47.658200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:53.696 [2024-05-15 01:06:47.658236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:118000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.696 [2024-05-15 01:06:47.658252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:53.696 [2024-05-15 01:06:47.658275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:118008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.696 [2024-05-15 01:06:47.658305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:53.696 [2024-05-15 01:06:47.658327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:118016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.696 [2024-05-15 01:06:47.658342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:53.696 [2024-05-15 01:06:47.658363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:118024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.696 [2024-05-15 01:06:47.658382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:53.696 [2024-05-15 01:06:47.658403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:118032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.696 [2024-05-15 01:06:47.658418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:53.696 [2024-05-15 01:06:47.658438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:118040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.696 [2024-05-15 01:06:47.658453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:53.696 [2024-05-15 01:06:47.658474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:118048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.696 [2024-05-15 01:06:47.658489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:53.696 [2024-05-15 01:06:47.658509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:118056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.696 [2024-05-15 01:06:47.658524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:53.696 [2024-05-15 01:06:47.658544] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:118064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.696 [2024-05-15 01:06:47.658558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:53.697 [2024-05-15 01:06:47.658579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.697 [2024-05-15 01:06:47.658594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:53.697 [2024-05-15 01:06:47.658614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:118080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.697 [2024-05-15 01:06:47.658628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:53.697 [2024-05-15 01:06:47.658649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:118088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.697 [2024-05-15 01:06:47.658663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:53.697 [2024-05-15 01:06:47.658683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:118096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.697 [2024-05-15 01:06:47.658713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:53.697 [2024-05-15 01:06:47.658734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:118104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.697 [2024-05-15 01:06:47.658749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:53.697 [2024-05-15 01:06:47.658770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:118112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.697 [2024-05-15 01:06:47.658786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:53.697 [2024-05-15 01:06:47.658806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:118120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.697 [2024-05-15 01:06:47.658821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:53.697 [2024-05-15 01:06:47.658846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:118128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.697 [2024-05-15 01:06:47.658862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:53.697 [2024-05-15 01:06:47.658883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:118136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.697 [2024-05-15 01:06:47.658898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:53.697 [2024-05-15 
01:06:47.658943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:118144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.697 [2024-05-15 01:06:47.658961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:53.697 [2024-05-15 01:06:47.659000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:118152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.697 [2024-05-15 01:06:47.659018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:53.697 [2024-05-15 01:06:47.659040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:118160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.697 [2024-05-15 01:06:47.659056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:53.697 [2024-05-15 01:06:47.659078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:118168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.697 [2024-05-15 01:06:47.659095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:53.697 [2024-05-15 01:06:47.659117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:118176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.697 [2024-05-15 01:06:47.659133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:53.697 [2024-05-15 01:06:47.659155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:118184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.697 [2024-05-15 01:06:47.659171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:53.697 [2024-05-15 01:06:47.659193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:118192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.697 [2024-05-15 01:06:47.659209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:53.697 [2024-05-15 01:06:47.659246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:118200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.697 [2024-05-15 01:06:47.659261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:53.697 [2024-05-15 01:06:47.659296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:118208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.697 [2024-05-15 01:06:47.659312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:53.697 [2024-05-15 01:06:47.659333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.697 [2024-05-15 01:06:47.659347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 
cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:53.697 [2024-05-15 01:06:47.659371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:118224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.697 [2024-05-15 01:06:47.659387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:53.697 [2024-05-15 01:06:47.659407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:118232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.697 [2024-05-15 01:06:47.659422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:53.697 [2024-05-15 01:06:47.659442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:118240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.697 [2024-05-15 01:06:47.659456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:53.697 [2024-05-15 01:06:47.659476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:118248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.697 [2024-05-15 01:06:47.659491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:53.697 [2024-05-15 01:06:47.659511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:118256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.697 [2024-05-15 01:06:47.659526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:53.697 [2024-05-15 01:06:47.659546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:118264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.697 [2024-05-15 01:06:47.659561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:53.697 [2024-05-15 01:06:47.659581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:118272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.697 [2024-05-15 01:06:47.659596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:53.697 [2024-05-15 01:06:47.659616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:118280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.697 [2024-05-15 01:06:47.659631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:53.697 [2024-05-15 01:06:47.659651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:118288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.697 [2024-05-15 01:06:47.659665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:53.697 [2024-05-15 01:06:47.659686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:118296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.697 [2024-05-15 01:06:47.659700] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:53.697 [2024-05-15 01:06:47.659720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:118304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.697 [2024-05-15 01:06:47.659735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:53.698 [2024-05-15 01:06:47.659755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:118312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.698 [2024-05-15 01:06:47.659770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:53.698 [2024-05-15 01:06:47.659790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:118320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.698 [2024-05-15 01:06:47.659808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:53.698 [2024-05-15 01:06:47.659845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:118328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.698 [2024-05-15 01:06:47.659861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:53.698 [2024-05-15 01:06:47.659882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:118336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.698 [2024-05-15 01:06:47.659897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:53.698 [2024-05-15 01:06:47.659941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:118344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.698 [2024-05-15 01:06:47.659960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:53.698 [2024-05-15 01:06:47.659983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:118352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.698 [2024-05-15 01:06:47.660000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:53.698 [2024-05-15 01:06:47.660022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:118360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.698 [2024-05-15 01:06:47.660038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:53.698 [2024-05-15 01:06:47.660061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:118368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.698 [2024-05-15 01:06:47.660079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:53.698 [2024-05-15 01:06:47.660102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:118376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.698 [2024-05-15 
01:06:47.660118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:53.698 [2024-05-15 01:06:47.660140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:117600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.698 [2024-05-15 01:06:47.660157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:53.698 [2024-05-15 01:06:47.660185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:117608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.698 [2024-05-15 01:06:47.660202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:53.698 [2024-05-15 01:06:47.660240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:117616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.698 [2024-05-15 01:06:47.660257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:53.698 [2024-05-15 01:06:47.660279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:117624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.698 [2024-05-15 01:06:47.660295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:53.698 [2024-05-15 01:06:47.661078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:117632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.698 [2024-05-15 01:06:47.661107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:53.698 [2024-05-15 01:06:47.661135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:117640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.698 [2024-05-15 01:06:47.661153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:53.698 [2024-05-15 01:06:47.661176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:117648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.698 [2024-05-15 01:06:47.661193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:53.698 [2024-05-15 01:06:47.661214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:117656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.698 [2024-05-15 01:06:47.661231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:53.698 [2024-05-15 01:06:47.661253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.698 [2024-05-15 01:06:47.661283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:53.698 [2024-05-15 01:06:47.661306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 
lba:117672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.698 [2024-05-15 01:06:47.661321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:53.698 [2024-05-15 01:06:47.661364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:117680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.698 [2024-05-15 01:06:47.661381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:53.698 [2024-05-15 01:06:47.661402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:117688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.698 [2024-05-15 01:06:47.661418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:53.698 [2024-05-15 01:06:47.661440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:117696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.698 [2024-05-15 01:06:47.661470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:53.698 [2024-05-15 01:06:47.661493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:117704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.698 [2024-05-15 01:06:47.661508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:53.698 [2024-05-15 01:06:47.661529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:117712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.698 [2024-05-15 01:06:47.661545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:53.698 [2024-05-15 01:06:47.661566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:117720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.698 [2024-05-15 01:06:47.661582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:53.698 [2024-05-15 01:06:47.661602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:117728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.698 [2024-05-15 01:06:47.661637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:53.698 [2024-05-15 01:06:47.661661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.698 [2024-05-15 01:06:47.661678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:53.698 [2024-05-15 01:06:47.661715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:117744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.698 [2024-05-15 01:06:47.661730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:53.698 [2024-05-15 01:06:47.661764] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:117752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.698 [2024-05-15 01:06:47.661780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:53.698 [2024-05-15 01:06:47.661802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:117760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.698 [2024-05-15 01:06:47.661817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:53.698 [2024-05-15 01:06:47.661838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:117768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.698 [2024-05-15 01:06:47.661853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:53.698 [2024-05-15 01:06:47.661874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:117776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.698 [2024-05-15 01:06:47.661889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:53.698 [2024-05-15 01:06:47.661924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:117784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.698 [2024-05-15 01:06:47.661949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:53.698 [2024-05-15 01:06:47.661974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:117792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.698 [2024-05-15 01:06:47.661991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:53.698 [2024-05-15 01:06:47.662013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:117800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.698 [2024-05-15 01:06:47.662030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.662067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:117808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.699 [2024-05-15 01:06:47.662083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.662105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:117816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.699 [2024-05-15 01:06:47.662120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.662142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:117824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.699 [2024-05-15 01:06:47.662158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 
sqhd:004e p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.662184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.699 [2024-05-15 01:06:47.662201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.662238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:117840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.699 [2024-05-15 01:06:47.662253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.662274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:117848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.699 [2024-05-15 01:06:47.662288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.662308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:117856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.699 [2024-05-15 01:06:47.662324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.662344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.699 [2024-05-15 01:06:47.662360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.662380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:117368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.699 [2024-05-15 01:06:47.662395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.662415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:117376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.699 [2024-05-15 01:06:47.662430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.662451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.699 [2024-05-15 01:06:47.662466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.662486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:117392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.699 [2024-05-15 01:06:47.662501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.662521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:117400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.699 [2024-05-15 01:06:47.662536] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.662556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:117408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.699 [2024-05-15 01:06:47.662571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.662591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:117864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.699 [2024-05-15 01:06:47.662606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.662630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:117416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.699 [2024-05-15 01:06:47.662646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.662666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:117424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.699 [2024-05-15 01:06:47.662680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.662701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:117432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.699 [2024-05-15 01:06:47.662716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.662736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:117440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.699 [2024-05-15 01:06:47.662751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.662771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:117448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.699 [2024-05-15 01:06:47.662785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.662805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:117456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.699 [2024-05-15 01:06:47.662820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.662840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.699 [2024-05-15 01:06:47.662854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.662874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:117472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.699 
[2024-05-15 01:06:47.662888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.662924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.699 [2024-05-15 01:06:47.662950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.662974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:117488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.699 [2024-05-15 01:06:47.663004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.663027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:117496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.699 [2024-05-15 01:06:47.663044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.663066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:117504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.699 [2024-05-15 01:06:47.663081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.663107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:117512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.699 [2024-05-15 01:06:47.663124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.663146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:117520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.699 [2024-05-15 01:06:47.663161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.663183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:117528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.699 [2024-05-15 01:06:47.663199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.663237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:117536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.699 [2024-05-15 01:06:47.663253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.663288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:117544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.699 [2024-05-15 01:06:47.663303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.663324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 
lba:117552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.699 [2024-05-15 01:06:47.663339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.663359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:117560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.699 [2024-05-15 01:06:47.663373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.663393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:117568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.699 [2024-05-15 01:06:47.663408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:53.699 [2024-05-15 01:06:47.663428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:117576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.699 [2024-05-15 01:06:47.663442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:53.700 [2024-05-15 01:06:47.663462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:117584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.700 [2024-05-15 01:06:47.663477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:53.700 [2024-05-15 01:06:47.663497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:117592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.700 [2024-05-15 01:06:47.663527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:53.700 [2024-05-15 01:06:47.663549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:117872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.700 [2024-05-15 01:06:47.663564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:53.700 [2024-05-15 01:06:47.663585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:117880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.700 [2024-05-15 01:06:47.663605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:53.700 [2024-05-15 01:06:47.664215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:117888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.700 [2024-05-15 01:06:47.664257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:53.700 [2024-05-15 01:06:47.664285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:117896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.700 [2024-05-15 01:06:47.664303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:53.700 [2024-05-15 01:06:47.664340] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:117904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.700 [2024-05-15 01:06:47.664356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:53.700 [2024-05-15 01:06:47.664378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:117912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.700 [2024-05-15 01:06:47.664393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:53.700 [2024-05-15 01:06:47.664414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:117920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.700 [2024-05-15 01:06:47.664429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:53.700 [2024-05-15 01:06:47.664450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:117928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.700 [2024-05-15 01:06:47.664466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:53.700 [2024-05-15 01:06:47.664502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:117936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.700 [2024-05-15 01:06:47.664517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:53.700 [2024-05-15 01:06:47.664538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:117944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.700 [2024-05-15 01:06:47.664553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:53.700 [2024-05-15 01:06:47.664573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:117952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.700 [2024-05-15 01:06:47.664588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:53.700 [2024-05-15 01:06:47.664609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:117960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.700 [2024-05-15 01:06:47.664623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:53.700 [2024-05-15 01:06:47.664643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:117968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.700 [2024-05-15 01:06:47.664658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:53.700 [2024-05-15 01:06:47.664677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:117976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.700 [2024-05-15 01:06:47.664696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007f p:0 m:0 
dnr:0 00:19:53.700 [2024-05-15 01:06:47.664717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:117984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.700 [2024-05-15 01:06:47.664732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.700 [2024-05-15 01:06:47.664752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:117992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.700 [2024-05-15 01:06:47.664768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:53.700 [2024-05-15 01:06:47.664788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:118000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.700 [2024-05-15 01:06:47.664802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:53.700 [2024-05-15 01:06:47.664828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:118008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.700 [2024-05-15 01:06:47.664844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:53.700 [2024-05-15 01:06:47.664864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:118016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.700 [2024-05-15 01:06:47.664879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:53.700 [2024-05-15 01:06:47.664899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:118024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.700 [2024-05-15 01:06:47.664928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:53.700 [2024-05-15 01:06:47.664966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:118032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.700 [2024-05-15 01:06:47.664998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:53.700 [2024-05-15 01:06:47.665020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:118040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.700 [2024-05-15 01:06:47.665035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:53.700 [2024-05-15 01:06:47.665057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:118048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.700 [2024-05-15 01:06:47.665072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:53.700 [2024-05-15 01:06:47.665094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:118056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.700 [2024-05-15 01:06:47.665109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:53.700 [2024-05-15 01:06:47.665130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:118064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.700 [2024-05-15 01:06:47.665146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:53.700 [2024-05-15 01:06:47.665167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.700 [2024-05-15 01:06:47.665182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:53.700 [2024-05-15 01:06:47.665223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.700 [2024-05-15 01:06:47.665239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:53.700 [2024-05-15 01:06:47.665260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:118088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.700 [2024-05-15 01:06:47.665276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:53.700 [2024-05-15 01:06:47.665311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:118096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.700 [2024-05-15 01:06:47.665325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:53.700 [2024-05-15 01:06:47.665345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:118104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.700 [2024-05-15 01:06:47.665360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:53.700 [2024-05-15 01:06:47.665380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:118112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.700 [2024-05-15 01:06:47.665409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:53.700 [2024-05-15 01:06:47.665431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:118120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.700 [2024-05-15 01:06:47.665447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:53.700 [2024-05-15 01:06:47.665468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:118128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.700 [2024-05-15 01:06:47.665483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:53.701 [2024-05-15 01:06:47.665505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:118136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.701 [2024-05-15 01:06:47.665520] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:53.701 [2024-05-15 01:06:47.665541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:118144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.701 [2024-05-15 01:06:47.665556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:53.701 [2024-05-15 01:06:47.665577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:118152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.701 [2024-05-15 01:06:47.665592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:53.701 [2024-05-15 01:06:47.665612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:118160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.701 [2024-05-15 01:06:47.665628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:53.701 [2024-05-15 01:06:47.665648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:118168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.701 [2024-05-15 01:06:47.665664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:53.701 [2024-05-15 01:06:47.665688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:118176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.701 [2024-05-15 01:06:47.665720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:53.701 [2024-05-15 01:06:47.665742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:118184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.701 [2024-05-15 01:06:47.665757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:53.701 [2024-05-15 01:06:47.665777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:118192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.701 [2024-05-15 01:06:47.665793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:53.701 [2024-05-15 01:06:47.665813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:118200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.701 [2024-05-15 01:06:47.665828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:53.701 [2024-05-15 01:06:47.665848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:118208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.701 [2024-05-15 01:06:47.665863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:53.701 [2024-05-15 01:06:47.665883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:118216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:53.701 [2024-05-15 01:06:47.665898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:53.701 [2024-05-15 01:06:47.665939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:118224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.701 [2024-05-15 01:06:47.665957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:53.701 [2024-05-15 01:06:47.665994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:118232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.701 [2024-05-15 01:06:47.666011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:53.701 [2024-05-15 01:06:47.666033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:118240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.701 [2024-05-15 01:06:47.666049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:53.701 [2024-05-15 01:06:47.666070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:118248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.701 [2024-05-15 01:06:47.666086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:53.701 [2024-05-15 01:06:47.666107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:118256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.701 [2024-05-15 01:06:47.666124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:53.701 [2024-05-15 01:06:47.666145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:118264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.701 [2024-05-15 01:06:47.666161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:53.701 [2024-05-15 01:06:47.666183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:118272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.701 [2024-05-15 01:06:47.666219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:53.701 [2024-05-15 01:06:47.666243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:118280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.701 [2024-05-15 01:06:47.666259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:53.701 [2024-05-15 01:06:47.666294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:118288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.701 [2024-05-15 01:06:47.666310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:53.701 [2024-05-15 01:06:47.666330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 
lba:118296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.701 [2024-05-15 01:06:47.666345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:53.701 [2024-05-15 01:06:47.666365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:118304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.701 [2024-05-15 01:06:47.666381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:53.701 [2024-05-15 01:06:47.666402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:118312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.701 [2024-05-15 01:06:47.666416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:53.701 [2024-05-15 01:06:47.666438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:118320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.701 [2024-05-15 01:06:47.666454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:53.701 [2024-05-15 01:06:47.666474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:118328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.701 [2024-05-15 01:06:47.666488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:53.701 [2024-05-15 01:06:47.666524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:118336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.701 [2024-05-15 01:06:47.666540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:53.701 [2024-05-15 01:06:47.666561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:118344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.701 [2024-05-15 01:06:47.666576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:53.701 [2024-05-15 01:06:47.666597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:118352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.701 [2024-05-15 01:06:47.666612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:53.701 [2024-05-15 01:06:47.666633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:118360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.701 [2024-05-15 01:06:47.666648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:53.701 [2024-05-15 01:06:47.666683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:118368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.701 [2024-05-15 01:06:47.666705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:53.701 [2024-05-15 01:06:47.666728] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:118376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.701 [2024-05-15 01:06:47.666745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:53.701 [2024-05-15 01:06:47.666767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.701 [2024-05-15 01:06:47.666783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:53.702 [2024-05-15 01:06:47.666821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:117608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.702 [2024-05-15 01:06:47.666838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:53.702 [2024-05-15 01:06:47.666860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:117616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.702 [2024-05-15 01:06:47.666877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:53.702 [2024-05-15 01:06:47.667718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:117624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.702 [2024-05-15 01:06:47.667757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:53.702 [2024-05-15 01:06:47.667789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:117632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.702 [2024-05-15 01:06:47.667808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:53.702 [2024-05-15 01:06:47.667831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:117640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.702 [2024-05-15 01:06:47.667847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:53.702 [2024-05-15 01:06:47.667869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:117648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.702 [2024-05-15 01:06:47.667884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:53.702 [2024-05-15 01:06:47.667906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:117656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.702 [2024-05-15 01:06:47.667947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:53.702 [2024-05-15 01:06:47.667973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:117664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.702 [2024-05-15 01:06:47.667990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003a p:0 
m:0 dnr:0 00:19:53.702 [2024-05-15 01:06:47.668012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:117672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.702 [2024-05-15 01:06:47.668029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:53.702 [2024-05-15 01:06:47.668052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:117680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.702 [2024-05-15 01:06:47.668073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:53.702 [2024-05-15 01:06:47.668096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:117688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.702 [2024-05-15 01:06:47.668113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:53.702 [2024-05-15 01:06:47.668135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:117696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.702 [2024-05-15 01:06:47.668152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:53.702 [2024-05-15 01:06:47.668174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:117704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.702 [2024-05-15 01:06:47.668190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:53.702 [2024-05-15 01:06:47.668212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:117712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.702 [2024-05-15 01:06:47.668242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:53.702 [2024-05-15 01:06:47.668266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:117720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.702 [2024-05-15 01:06:47.668282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:53.702 [2024-05-15 01:06:47.668319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:117728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.702 [2024-05-15 01:06:47.668334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:53.702 [2024-05-15 01:06:47.668356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:117736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.702 [2024-05-15 01:06:47.668371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:53.702 [2024-05-15 01:06:47.668392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:117744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.702 [2024-05-15 01:06:47.668423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:53.702 [2024-05-15 01:06:47.668444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:117752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.702 [2024-05-15 01:06:47.668459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:53.702 [2024-05-15 01:06:47.668479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:117760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.702 [2024-05-15 01:06:47.668495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:53.702 [2024-05-15 01:06:47.668516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:117768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.702 [2024-05-15 01:06:47.668530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:53.702 [2024-05-15 01:06:47.668550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:117776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.703 [2024-05-15 01:06:47.668565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.668589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:117784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.703 [2024-05-15 01:06:47.668605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.668625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:117792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.703 [2024-05-15 01:06:47.668640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.668660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:117800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.703 [2024-05-15 01:06:47.668674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.668694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:117808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.703 [2024-05-15 01:06:47.668709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.668729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:117816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.703 [2024-05-15 01:06:47.668744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.668764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:117824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.703 [2024-05-15 
01:06:47.668779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.668799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:117832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.703 [2024-05-15 01:06:47.668814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.668834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:117840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.703 [2024-05-15 01:06:47.668848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.668869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:117848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.703 [2024-05-15 01:06:47.668884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.668904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:117856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.703 [2024-05-15 01:06:47.668941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.668972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:117360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.703 [2024-05-15 01:06:47.668990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.669012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:117368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.703 [2024-05-15 01:06:47.669028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.669054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.703 [2024-05-15 01:06:47.669070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.669091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.703 [2024-05-15 01:06:47.669107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.669129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:117392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.703 [2024-05-15 01:06:47.669144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.669165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:117400 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.703 [2024-05-15 01:06:47.669182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.669203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.703 [2024-05-15 01:06:47.669233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.669255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:117864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.703 [2024-05-15 01:06:47.669270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.669290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:117416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.703 [2024-05-15 01:06:47.669305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.669326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:117424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.703 [2024-05-15 01:06:47.669341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.669361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:117432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.703 [2024-05-15 01:06:47.669375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.669396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:117440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.703 [2024-05-15 01:06:47.669411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.669431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:117448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.703 [2024-05-15 01:06:47.669445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.669465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:117456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.703 [2024-05-15 01:06:47.669480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.669504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.703 [2024-05-15 01:06:47.669520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.669540] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:117472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.703 [2024-05-15 01:06:47.669555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.669576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.703 [2024-05-15 01:06:47.669591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.669612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:117488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.703 [2024-05-15 01:06:47.669627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.669647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.703 [2024-05-15 01:06:47.669662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.669681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:117504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.703 [2024-05-15 01:06:47.669696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.669716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:117512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.703 [2024-05-15 01:06:47.669731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.669751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:117520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.703 [2024-05-15 01:06:47.669766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.669787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:117528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.703 [2024-05-15 01:06:47.669802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.669822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:117536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.703 [2024-05-15 01:06:47.669837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.669873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:117544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.703 [2024-05-15 01:06:47.669888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 
sqhd:006b p:0 m:0 dnr:0 00:19:53.703 [2024-05-15 01:06:47.669909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:117552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.703 [2024-05-15 01:06:47.669948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:53.704 [2024-05-15 01:06:47.669973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:117560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.704 [2024-05-15 01:06:47.669994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:53.704 [2024-05-15 01:06:47.670017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.704 [2024-05-15 01:06:47.670033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:53.704 [2024-05-15 01:06:47.670055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:117576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.704 [2024-05-15 01:06:47.670072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:53.704 [2024-05-15 01:06:47.670094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:117584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.704 [2024-05-15 01:06:47.670111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:53.704 [2024-05-15 01:06:47.670133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:117592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.704 [2024-05-15 01:06:47.670165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:53.704 [2024-05-15 01:06:47.670188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:117872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.704 [2024-05-15 01:06:47.670205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:53.704 [2024-05-15 01:06:47.670830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:117880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.704 [2024-05-15 01:06:47.670852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:53.704 [2024-05-15 01:06:47.670881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:117888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.704 [2024-05-15 01:06:47.670900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:53.704 [2024-05-15 01:06:47.670945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:117896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.704 [2024-05-15 01:06:47.670965] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:53.704 [2024-05-15 01:06:47.670987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:117904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.704 [2024-05-15 01:06:47.671004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:53.704 [2024-05-15 01:06:47.671027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:117912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.704 [2024-05-15 01:06:47.671043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:53.704 [2024-05-15 01:06:47.671065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.704 [2024-05-15 01:06:47.671097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:53.704 [2024-05-15 01:06:47.671120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:117928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.704 [2024-05-15 01:06:47.671141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:53.704 [2024-05-15 01:06:47.671163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:117936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.704 [2024-05-15 01:06:47.671180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:53.704 [2024-05-15 01:06:47.671201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:117944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.704 [2024-05-15 01:06:47.671232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:53.704 [2024-05-15 01:06:47.671254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:117952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.704 [2024-05-15 01:06:47.671268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:53.704 [2024-05-15 01:06:47.671289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:117960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.704 [2024-05-15 01:06:47.671304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:53.704 [2024-05-15 01:06:47.671324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:117968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.704 [2024-05-15 01:06:47.671339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:53.704 [2024-05-15 01:06:47.671359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:117976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.704 [2024-05-15 
01:06:47.671374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:53.704 [2024-05-15 01:06:47.671394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:117984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.704 [2024-05-15 01:06:47.671409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.704 [2024-05-15 01:06:47.671429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:117992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.704 [2024-05-15 01:06:47.671444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:53.704 [2024-05-15 01:06:47.671464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:118000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.704 [2024-05-15 01:06:47.671478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:53.704 [2024-05-15 01:06:47.671499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:118008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.704 [2024-05-15 01:06:47.671514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:53.704 [2024-05-15 01:06:47.671534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:118016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.704 [2024-05-15 01:06:47.671549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:53.704 [2024-05-15 01:06:47.671569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:118024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.704 [2024-05-15 01:06:47.671585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:53.704 [2024-05-15 01:06:47.671609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:118032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.704 [2024-05-15 01:06:47.671625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:53.704 [2024-05-15 01:06:47.671645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:118040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.704 [2024-05-15 01:06:47.671660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:53.704 [2024-05-15 01:06:47.671680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:118048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.704 [2024-05-15 01:06:47.671711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:53.704 [2024-05-15 01:06:47.671732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:118056 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.704 [2024-05-15 01:06:47.671748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:53.704 [2024-05-15 01:06:47.671769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:118064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.704 [2024-05-15 01:06:47.671784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:53.704 [2024-05-15 01:06:47.671805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.704 [2024-05-15 01:06:47.671821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:53.704 [2024-05-15 01:06:47.671842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:118080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.704 [2024-05-15 01:06:47.671857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:53.704 [2024-05-15 01:06:47.671878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:118088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.704 [2024-05-15 01:06:47.671894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:53.704 [2024-05-15 01:06:47.671936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:118096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.704 [2024-05-15 01:06:47.671954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:53.705 [2024-05-15 01:06:47.671992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:118104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.705 [2024-05-15 01:06:47.672010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:53.705 [2024-05-15 01:06:47.672033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:118112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.705 [2024-05-15 01:06:47.672064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:53.705 [2024-05-15 01:06:47.672086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:118120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.705 [2024-05-15 01:06:47.672102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:53.705 [2024-05-15 01:06:47.672127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:118128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.705 [2024-05-15 01:06:47.672143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:53.705 [2024-05-15 01:06:47.672165] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:118136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.705 [2024-05-15 01:06:47.672181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:53.705 [2024-05-15 01:06:47.672202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:118144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.705 [2024-05-15 01:06:47.672232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:53.705 [2024-05-15 01:06:47.672254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:118152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.705 [2024-05-15 01:06:47.672269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:53.705 [2024-05-15 01:06:47.672290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:118160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.705 [2024-05-15 01:06:47.679324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:53.705 [2024-05-15 01:06:47.679383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:118168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.705 [2024-05-15 01:06:47.679401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:53.705 [2024-05-15 01:06:47.679422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:118176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.705 [2024-05-15 01:06:47.679438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:53.705 [2024-05-15 01:06:47.679459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:118184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.705 [2024-05-15 01:06:47.679474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:53.705 [2024-05-15 01:06:47.679494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:118192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.705 [2024-05-15 01:06:47.679509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:53.705 [2024-05-15 01:06:47.679530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.705 [2024-05-15 01:06:47.679545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:53.705 [2024-05-15 01:06:47.679565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:118208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.705 [2024-05-15 01:06:47.679579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:53.705 [2024-05-15 
01:06:47.679600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:118216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.705 [2024-05-15 01:06:47.679615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:53.705 [2024-05-15 01:06:47.679635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:118224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.705 [2024-05-15 01:06:47.679655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:53.705 [2024-05-15 01:06:47.679676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:118232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.705 [2024-05-15 01:06:47.679691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:53.705 [2024-05-15 01:06:47.679712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:118240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.705 [2024-05-15 01:06:47.679727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:53.705 [2024-05-15 01:06:47.679747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:118248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.705 [2024-05-15 01:06:47.679762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:53.705 [2024-05-15 01:06:47.679782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:118256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.705 [2024-05-15 01:06:47.679797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:53.705 [2024-05-15 01:06:47.679819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:118264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.705 [2024-05-15 01:06:47.679834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:53.705 [2024-05-15 01:06:47.679854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:118272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.705 [2024-05-15 01:06:47.679869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:53.705 [2024-05-15 01:06:47.679889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:118280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.705 [2024-05-15 01:06:47.679904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:53.705 [2024-05-15 01:06:47.679951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:118288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.705 [2024-05-15 01:06:47.679970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 
cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:53.705 [2024-05-15 01:06:47.679991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:118296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.705 [2024-05-15 01:06:47.680007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:53.705 [2024-05-15 01:06:47.680028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:118304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.705 [2024-05-15 01:06:47.680044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:53.705 [2024-05-15 01:06:47.680065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:118312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.705 [2024-05-15 01:06:47.680081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:53.705 [2024-05-15 01:06:47.680102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:118320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.705 [2024-05-15 01:06:47.680123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:53.705 [2024-05-15 01:06:47.680145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:118328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.705 [2024-05-15 01:06:47.680161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:53.705 [2024-05-15 01:06:47.680182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:118336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.705 [2024-05-15 01:06:47.680197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:53.705 [2024-05-15 01:06:47.680233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:118344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.705 [2024-05-15 01:06:47.680249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:53.705 [2024-05-15 01:06:47.680270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:118352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.705 [2024-05-15 01:06:47.680285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:53.705 [2024-05-15 01:06:47.680305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:118360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.705 [2024-05-15 01:06:47.680320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:53.705 [2024-05-15 01:06:47.680340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:118368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.705 [2024-05-15 01:06:47.680355] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:53.705 [2024-05-15 01:06:47.680375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:118376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.706 [2024-05-15 01:06:47.680389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:53.706 [2024-05-15 01:06:47.680410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:117600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.706 [2024-05-15 01:06:47.680425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:53.706 [2024-05-15 01:06:47.680445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:117608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.706 [2024-05-15 01:06:47.680461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:53.706 [2024-05-15 01:06:47.681300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:117616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.706 [2024-05-15 01:06:47.681329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:53.706 [2024-05-15 01:06:47.681358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:117624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.706 [2024-05-15 01:06:47.681376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:53.706 [2024-05-15 01:06:47.681399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:117632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.706 [2024-05-15 01:06:47.681420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:53.706 [2024-05-15 01:06:47.681443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:117640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.706 [2024-05-15 01:06:47.681460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:53.706 [2024-05-15 01:06:47.681481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.706 [2024-05-15 01:06:47.681498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:53.706 [2024-05-15 01:06:47.681536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:117656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.706 [2024-05-15 01:06:47.681553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:53.706 [2024-05-15 01:06:47.681576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:117664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.706 
[2024-05-15 01:06:47.681593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:53.706 [2024-05-15 01:06:47.681616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:117672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.706 [2024-05-15 01:06:47.681632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:53.706 [2024-05-15 01:06:47.681654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:117680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.706 [2024-05-15 01:06:47.681671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:53.706 [2024-05-15 01:06:47.681693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:117688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.706 [2024-05-15 01:06:47.681709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:53.706 [2024-05-15 01:06:47.681731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:117696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.706 [2024-05-15 01:06:47.681748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:53.706 [2024-05-15 01:06:47.681770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:117704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.706 [2024-05-15 01:06:47.681786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:53.706 [2024-05-15 01:06:47.681823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:117712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.706 [2024-05-15 01:06:47.681839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:53.706 [2024-05-15 01:06:47.681860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.706 [2024-05-15 01:06:47.681876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:53.706 [2024-05-15 01:06:47.681897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:117728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.706 [2024-05-15 01:06:47.681928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:53.706 [2024-05-15 01:06:47.681966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:117736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.706 [2024-05-15 01:06:47.681984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:53.706 [2024-05-15 01:06:47.682006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 
nsid:1 lba:117744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.706 [2024-05-15 01:06:47.682022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:53.706 [2024-05-15 01:06:47.682044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:117752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.706 [2024-05-15 01:06:47.682059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:53.706 [2024-05-15 01:06:47.682080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:117760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.706 [2024-05-15 01:06:47.682096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:53.706 [2024-05-15 01:06:47.682118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:117768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.706 [2024-05-15 01:06:47.682134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:53.706 [2024-05-15 01:06:47.682170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:117776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.706 [2024-05-15 01:06:47.682187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:53.706 [2024-05-15 01:06:47.682208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:117784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.706 [2024-05-15 01:06:47.682238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:53.706 [2024-05-15 01:06:47.682259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:117792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.706 [2024-05-15 01:06:47.682275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:53.706 [2024-05-15 01:06:47.682296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:117800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.706 [2024-05-15 01:06:47.682311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:53.706 [2024-05-15 01:06:47.682331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:117808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.706 [2024-05-15 01:06:47.682347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:53.706 [2024-05-15 01:06:47.682368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.706 [2024-05-15 01:06:47.682383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:53.706 [2024-05-15 01:06:47.682404] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:117824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.706 [2024-05-15 01:06:47.682419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:53.706 [2024-05-15 01:06:47.682443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:117832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.706 [2024-05-15 01:06:47.682459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:53.706 [2024-05-15 01:06:47.682479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:117840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.706 [2024-05-15 01:06:47.682494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:53.706 [2024-05-15 01:06:47.682514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:117848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.706 [2024-05-15 01:06:47.682529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:53.706 [2024-05-15 01:06:47.682550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:117856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.706 [2024-05-15 01:06:47.682565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:53.706 [2024-05-15 01:06:47.682585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:117360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.706 [2024-05-15 01:06:47.682600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:53.706 [2024-05-15 01:06:47.682620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.706 [2024-05-15 01:06:47.682635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:53.707 [2024-05-15 01:06:47.682655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:117376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.707 [2024-05-15 01:06:47.682670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:53.707 [2024-05-15 01:06:47.682691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.707 [2024-05-15 01:06:47.682707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:53.707 [2024-05-15 01:06:47.682727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:117392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.707 [2024-05-15 01:06:47.682742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0057 p:0 
m:0 dnr:0 00:19:53.707 [2024-05-15 01:06:47.682762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:117400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.707 [2024-05-15 01:06:47.682777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:53.707 [2024-05-15 01:06:47.682797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:117408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.707 [2024-05-15 01:06:47.682812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:53.707 [2024-05-15 01:06:47.682832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:117864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.707 [2024-05-15 01:06:47.682847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:53.707 [2024-05-15 01:06:47.682867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:117416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.707 [2024-05-15 01:06:47.682888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:53.707 [2024-05-15 01:06:47.682925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:117424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.707 [2024-05-15 01:06:47.682950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:53.707 [2024-05-15 01:06:47.682973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:117432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.707 [2024-05-15 01:06:47.682989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:53.707 [2024-05-15 01:06:47.683010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:117440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.707 [2024-05-15 01:06:47.683025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:53.707 [2024-05-15 01:06:47.683046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:117448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.707 [2024-05-15 01:06:47.683061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:53.707 [2024-05-15 01:06:47.683082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:117456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.707 [2024-05-15 01:06:47.683098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:53.707 [2024-05-15 01:06:47.683119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.707 [2024-05-15 01:06:47.683134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:53.707 [2024-05-15 01:06:47.683156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:117472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.707 [2024-05-15 01:06:47.683172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:53.707 [2024-05-15 01:06:47.683193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.707 [2024-05-15 01:06:47.683209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:53.707 [2024-05-15 01:06:47.683230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:117488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.707 [2024-05-15 01:06:47.683245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:53.707 [2024-05-15 01:06:47.683281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:117496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.707 [2024-05-15 01:06:47.683297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:53.707 [2024-05-15 01:06:47.683318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:117504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.707 [2024-05-15 01:06:47.683333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:53.707 [2024-05-15 01:06:47.683353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:117512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.707 [2024-05-15 01:06:47.683373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:53.707 [2024-05-15 01:06:47.683394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:117520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.707 [2024-05-15 01:06:47.683409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:53.707 [2024-05-15 01:06:47.683429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:117528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.707 [2024-05-15 01:06:47.683445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:53.707 [2024-05-15 01:06:47.683466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:117536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.707 [2024-05-15 01:06:47.683481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:53.707 [2024-05-15 01:06:47.683502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:117544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.707 [2024-05-15 
01:06:47.683517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:53.707 [2024-05-15 01:06:47.683538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:117552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.707 [2024-05-15 01:06:47.683553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:53.707 [2024-05-15 01:06:47.683573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:117560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.708 [2024-05-15 01:06:47.683588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:53.708 [2024-05-15 01:06:47.683608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:117568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.708 [2024-05-15 01:06:47.683639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:53.708 [2024-05-15 01:06:47.683661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:117576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.708 [2024-05-15 01:06:47.683676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:53.708 [2024-05-15 01:06:47.683697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:117584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.708 [2024-05-15 01:06:47.683713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:53.708 [2024-05-15 01:06:47.683735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:117592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.708 [2024-05-15 01:06:47.683750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:53.708 [2024-05-15 01:06:47.684362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:117872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.708 [2024-05-15 01:06:47.684385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:53.708 [2024-05-15 01:06:47.684427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:117880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.708 [2024-05-15 01:06:47.684450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:53.708 [2024-05-15 01:06:47.684472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:117888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.708 [2024-05-15 01:06:47.684488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:53.708 [2024-05-15 01:06:47.684509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 
lba:117896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.708 [2024-05-15 01:06:47.684524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:53.708 [2024-05-15 01:06:47.684546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:117904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.708 [2024-05-15 01:06:47.684561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:53.708 [2024-05-15 01:06:47.684582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:117912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.708 [2024-05-15 01:06:47.684598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:53.708 [2024-05-15 01:06:47.684634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:117920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.708 [2024-05-15 01:06:47.684650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:53.708 [2024-05-15 01:06:47.684670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:117928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.708 [2024-05-15 01:06:47.684685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:53.708 [2024-05-15 01:06:47.684705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:117936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.708 [2024-05-15 01:06:47.684719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:53.708 [2024-05-15 01:06:47.684739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:117944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.708 [2024-05-15 01:06:47.684754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:53.708 [2024-05-15 01:06:47.684774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:117952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.708 [2024-05-15 01:06:47.684789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:53.708 [2024-05-15 01:06:47.684809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:117960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.708 [2024-05-15 01:06:47.684823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:53.708 [2024-05-15 01:06:47.684843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:117968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.708 [2024-05-15 01:06:47.684858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:53.708 [2024-05-15 01:06:47.684878] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:117976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.708 [2024-05-15 01:06:47.684893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:53.708 [2024-05-15 01:06:47.684940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:117984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.708 [2024-05-15 01:06:47.684959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.708 [2024-05-15 01:06:47.684980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:117992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.708 [2024-05-15 01:06:47.684996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:53.708 [2024-05-15 01:06:47.685017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:118000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.708 [2024-05-15 01:06:47.685032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:53.708 [2024-05-15 01:06:47.685054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:118008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.708 [2024-05-15 01:06:47.685070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:53.708 [2024-05-15 01:06:47.685091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:118016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.708 [2024-05-15 01:06:47.685106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:53.708 [2024-05-15 01:06:47.685127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:118024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.708 [2024-05-15 01:06:47.685142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:53.708 [2024-05-15 01:06:47.685163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:118032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.708 [2024-05-15 01:06:47.685178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:53.708 [2024-05-15 01:06:47.685199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:118040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.708 [2024-05-15 01:06:47.685229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:53.708 [2024-05-15 01:06:47.685250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:118048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.708 [2024-05-15 01:06:47.685265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 
00:19:53.708 [2024-05-15 01:06:47.685286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:118056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.708 [2024-05-15 01:06:47.685301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:53.708 [2024-05-15 01:06:47.685335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.708 [2024-05-15 01:06:47.685352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:53.708 [2024-05-15 01:06:47.685374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.708 [2024-05-15 01:06:47.685390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:53.708 [2024-05-15 01:06:47.685415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:118080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.708 [2024-05-15 01:06:47.685431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:53.708 [2024-05-15 01:06:47.685452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:118088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.708 [2024-05-15 01:06:47.685468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:53.708 [2024-05-15 01:06:47.685488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:118096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.708 [2024-05-15 01:06:47.685503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:53.709 [2024-05-15 01:06:47.685524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:118104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.709 [2024-05-15 01:06:47.685539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:53.709 [2024-05-15 01:06:47.685560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:118112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.709 [2024-05-15 01:06:47.685575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:53.709 [2024-05-15 01:06:47.685596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:118120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.709 [2024-05-15 01:06:47.685611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:53.709 [2024-05-15 01:06:47.685647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:118128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.709 [2024-05-15 01:06:47.685663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:53.709 [2024-05-15 01:06:47.685683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:118136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.709 [2024-05-15 01:06:47.685698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:53.709 [2024-05-15 01:06:47.685718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:118144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.709 [2024-05-15 01:06:47.685733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:53.709 [2024-05-15 01:06:47.685753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:118152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.709 [2024-05-15 01:06:47.685768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:53.709 [2024-05-15 01:06:47.685788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:118160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.709 [2024-05-15 01:06:47.685803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:53.709 [2024-05-15 01:06:47.685823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:118168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.709 [2024-05-15 01:06:47.685838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:53.709 [2024-05-15 01:06:47.685858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:118176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.709 [2024-05-15 01:06:47.685876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:53.709 [2024-05-15 01:06:47.685897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:118184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.709 [2024-05-15 01:06:47.685927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:53.709 [2024-05-15 01:06:47.685958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:118192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.709 [2024-05-15 01:06:47.685975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:53.709 [2024-05-15 01:06:47.685996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:118200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.709 [2024-05-15 01:06:47.686011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:53.709 [2024-05-15 01:06:47.686032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:118208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.709 [2024-05-15 01:06:47.686047] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:53.709 [2024-05-15 01:06:47.686068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:118216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.709 [2024-05-15 01:06:47.686083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:53.709 [2024-05-15 01:06:47.686104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:118224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.709 [2024-05-15 01:06:47.686119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:53.709 [2024-05-15 01:06:47.686139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:118232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.709 [2024-05-15 01:06:47.686155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:53.709 [2024-05-15 01:06:47.686176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:118240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.709 [2024-05-15 01:06:47.686191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:53.709 [2024-05-15 01:06:47.686225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:118248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.709 [2024-05-15 01:06:47.686241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:53.709 [2024-05-15 01:06:47.686262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:118256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.709 [2024-05-15 01:06:47.686278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:53.709 [2024-05-15 01:06:47.686299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:118264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.709 [2024-05-15 01:06:47.686314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:53.709 [2024-05-15 01:06:47.686349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:118272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.709 [2024-05-15 01:06:47.686369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:53.709 [2024-05-15 01:06:47.686392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:118280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.709 [2024-05-15 01:06:47.686407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:53.709 [2024-05-15 01:06:47.686428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:118288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:53.709 [2024-05-15 01:06:47.686443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:53.709 [2024-05-15 01:06:47.686464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:118296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.709 [2024-05-15 01:06:47.686479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:53.709 [2024-05-15 01:06:47.686500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:118304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.709 [2024-05-15 01:06:47.686530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:53.709 [2024-05-15 01:06:47.686552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:118312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.709 [2024-05-15 01:06:47.686567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:53.709 [2024-05-15 01:06:47.686606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:118320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.709 [2024-05-15 01:06:47.686622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:53.709 [2024-05-15 01:06:47.686660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:118328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.709 [2024-05-15 01:06:47.686676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:53.709 [2024-05-15 01:06:47.686697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:118336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.709 [2024-05-15 01:06:47.686713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:53.709 [2024-05-15 01:06:47.686734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:118344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.709 [2024-05-15 01:06:47.686750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:53.709 [2024-05-15 01:06:47.686771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:118352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.710 [2024-05-15 01:06:47.686787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:53.710 [2024-05-15 01:06:47.686808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:118360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.710 [2024-05-15 01:06:47.686824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:53.710 [2024-05-15 01:06:47.686845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 
nsid:1 lba:118368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.710 [2024-05-15 01:06:47.686864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:53.710 [2024-05-15 01:06:47.686902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:118376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.710 [2024-05-15 01:06:47.686918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:53.710 [2024-05-15 01:06:47.686946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:117600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.710 [2024-05-15 01:06:47.686964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:53.710 [2024-05-15 01:06:47.687720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:117608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.710 [2024-05-15 01:06:47.687743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:53.710 [2024-05-15 01:06:47.687771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:117616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.710 [2024-05-15 01:06:47.687789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:53.710 [2024-05-15 01:06:47.687832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:117624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.710 [2024-05-15 01:06:47.687851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:53.710 [2024-05-15 01:06:47.687873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:117632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.710 [2024-05-15 01:06:47.687889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:53.710 [2024-05-15 01:06:47.687926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:117640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.710 [2024-05-15 01:06:47.687952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:53.710 [2024-05-15 01:06:47.687977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:117648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.710 [2024-05-15 01:06:47.687994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:53.710 [2024-05-15 01:06:47.688016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:117656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.710 [2024-05-15 01:06:47.688032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:53.710 [2024-05-15 01:06:47.688054] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:117664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.710 [2024-05-15 01:06:47.688086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:53.710 [2024-05-15 01:06:47.688108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:117672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.710 [2024-05-15 01:06:47.688124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:53.710 [2024-05-15 01:06:47.688145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:117680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.710 [2024-05-15 01:06:47.688160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:53.710 [2024-05-15 01:06:47.688204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:117688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.710 [2024-05-15 01:06:47.688221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:53.710 [2024-05-15 01:06:47.688259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:117696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.710 [2024-05-15 01:06:47.688276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:53.710 [2024-05-15 01:06:47.688299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:117704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.710 [2024-05-15 01:06:47.688329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:53.710 [2024-05-15 01:06:47.688351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:117712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.710 [2024-05-15 01:06:47.688366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:53.710 [2024-05-15 01:06:47.688387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:117720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.710 [2024-05-15 01:06:47.688402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:53.710 [2024-05-15 01:06:47.688423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:117728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.710 [2024-05-15 01:06:47.688438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:53.710 [2024-05-15 01:06:47.688460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:117736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.710 [2024-05-15 01:06:47.688475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 
cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:53.710 [2024-05-15 01:06:47.688496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:117744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.710 [2024-05-15 01:06:47.688511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:53.710 [2024-05-15 01:06:47.688531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:117752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.710 [2024-05-15 01:06:47.688547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:53.710 [2024-05-15 01:06:47.688567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:117760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.710 [2024-05-15 01:06:47.688583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:53.710 [2024-05-15 01:06:47.688603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:117768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.710 [2024-05-15 01:06:47.688632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:53.710 [2024-05-15 01:06:47.688653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:117776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.710 [2024-05-15 01:06:47.688668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:53.710 [2024-05-15 01:06:47.688709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:117784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.710 [2024-05-15 01:06:47.688725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:53.710 [2024-05-15 01:06:47.688746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:117792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.710 [2024-05-15 01:06:47.688761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:53.710 [2024-05-15 01:06:47.688782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:117800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.710 [2024-05-15 01:06:47.688798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:53.710 [2024-05-15 01:06:47.688819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:117808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.710 [2024-05-15 01:06:47.688834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:53.710 [2024-05-15 01:06:47.688855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:117816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.710 [2024-05-15 01:06:47.688869] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:53.710 [2024-05-15 01:06:47.688890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:117824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.710 [2024-05-15 01:06:47.688919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:53.710 [2024-05-15 01:06:47.688951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:117832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.710 [2024-05-15 01:06:47.688969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:53.710 [2024-05-15 01:06:47.689006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:117840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.711 [2024-05-15 01:06:47.689022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 01:06:47.689043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:117848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.711 [2024-05-15 01:06:47.689058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 01:06:47.689094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:117856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.711 [2024-05-15 01:06:47.689111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 01:06:47.689133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.711 [2024-05-15 01:06:47.689150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 01:06:47.689172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:117368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.711 [2024-05-15 01:06:47.689188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 01:06:47.689224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:117376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.711 [2024-05-15 01:06:47.689245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 01:06:47.689267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.711 [2024-05-15 01:06:47.689283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 01:06:47.689304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117392 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:53.711 [2024-05-15 01:06:47.689319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 01:06:47.689340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:117400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.711 [2024-05-15 01:06:47.689356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 01:06:47.689391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:117408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.711 [2024-05-15 01:06:47.689407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 01:06:47.689427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:117864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.711 [2024-05-15 01:06:47.689443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 01:06:47.689463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:117416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.711 [2024-05-15 01:06:47.689494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 01:06:47.689515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:117424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.711 [2024-05-15 01:06:47.689531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 01:06:47.689551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:117432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.711 [2024-05-15 01:06:47.689567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 01:06:47.689587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:117440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.711 [2024-05-15 01:06:47.689602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 01:06:47.689623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:117448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.711 [2024-05-15 01:06:47.689639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 01:06:47.689659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:117456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.711 [2024-05-15 01:06:47.689674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 01:06:47.689695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.711 [2024-05-15 01:06:47.689713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 01:06:47.689735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:117472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.711 [2024-05-15 01:06:47.689750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 01:06:47.689787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.711 [2024-05-15 01:06:47.689802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 01:06:47.689822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:117488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.711 [2024-05-15 01:06:47.689852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 01:06:47.689874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:117496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.711 [2024-05-15 01:06:47.689889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 01:06:47.689910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:117504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.711 [2024-05-15 01:06:47.689925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 01:06:47.689969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:117512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.711 [2024-05-15 01:06:47.689987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 01:06:47.690008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:117520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.711 [2024-05-15 01:06:47.690024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 01:06:47.690045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:117528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.711 [2024-05-15 01:06:47.690061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 01:06:47.690083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:117536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.711 [2024-05-15 01:06:47.690099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 
01:06:47.690120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:117544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.711 [2024-05-15 01:06:47.690150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 01:06:47.690172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.711 [2024-05-15 01:06:47.690187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 01:06:47.690208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:117560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.711 [2024-05-15 01:06:47.690227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 01:06:47.690263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:117568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.711 [2024-05-15 01:06:47.690279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 01:06:47.690314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:117576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.711 [2024-05-15 01:06:47.690330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 01:06:47.690352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:117584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.711 [2024-05-15 01:06:47.690368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 01:06:47.690997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:117592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.711 [2024-05-15 01:06:47.691020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 01:06:47.691046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:117872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.711 [2024-05-15 01:06:47.691064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 01:06:47.691092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:117880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.711 [2024-05-15 01:06:47.691109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:53.711 [2024-05-15 01:06:47.691130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:117888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.712 [2024-05-15 01:06:47.691146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:14 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:53.712 [2024-05-15 01:06:47.691167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:117896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.712 [2024-05-15 01:06:47.691183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:53.712 [2024-05-15 01:06:47.691204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.712 [2024-05-15 01:06:47.691220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:53.712 [2024-05-15 01:06:47.691257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:117912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.712 [2024-05-15 01:06:47.691272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:53.712 [2024-05-15 01:06:47.691293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:117920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.712 [2024-05-15 01:06:47.691308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:53.712 [2024-05-15 01:06:47.691329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:117928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.712 [2024-05-15 01:06:47.691345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:53.712 [2024-05-15 01:06:47.691370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:117936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.712 [2024-05-15 01:06:47.691386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:53.712 [2024-05-15 01:06:47.691407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:117944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.712 [2024-05-15 01:06:47.691422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:53.712 [2024-05-15 01:06:47.691443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:117952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.712 [2024-05-15 01:06:47.691458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:53.712 [2024-05-15 01:06:47.691478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:117960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.712 [2024-05-15 01:06:47.691493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:53.712 [2024-05-15 01:06:47.691514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:117968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.712 [2024-05-15 01:06:47.691528] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:53.712 [2024-05-15 01:06:47.691549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:117976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.712 [2024-05-15 01:06:47.691580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:53.712 [2024-05-15 01:06:47.691600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:117984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.712 [2024-05-15 01:06:47.691615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.712 [2024-05-15 01:06:47.691635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.712 [2024-05-15 01:06:47.691650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:53.712 [2024-05-15 01:06:47.691670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:118000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.712 [2024-05-15 01:06:47.691685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:53.712 [2024-05-15 01:06:47.691706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:118008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.712 [2024-05-15 01:06:47.691721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:53.712 [2024-05-15 01:06:47.691741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:118016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.712 [2024-05-15 01:06:47.691756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:53.712 [2024-05-15 01:06:47.691776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:118024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.712 [2024-05-15 01:06:47.691791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:53.712 [2024-05-15 01:06:47.691815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:118032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.712 [2024-05-15 01:06:47.691831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:53.712 [2024-05-15 01:06:47.691852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:118040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.712 [2024-05-15 01:06:47.691867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:53.712 [2024-05-15 01:06:47.691887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:118048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:53.712 [2024-05-15 01:06:47.691901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:53.712 [2024-05-15 01:06:47.691947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:118056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.712 [2024-05-15 01:06:47.691965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:53.712 [2024-05-15 01:06:47.691986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:118064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.712 [2024-05-15 01:06:47.692002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:53.712 [2024-05-15 01:06:47.692023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.712 [2024-05-15 01:06:47.692038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:53.712 [2024-05-15 01:06:47.692059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:118080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.712 [2024-05-15 01:06:47.692075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:53.712 [2024-05-15 01:06:47.692096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:118088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.712 [2024-05-15 01:06:47.692111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:53.712 [2024-05-15 01:06:47.692132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:118096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.712 [2024-05-15 01:06:47.692147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:53.713 [2024-05-15 01:06:47.692168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:118104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.713 [2024-05-15 01:06:47.692183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:53.713 [2024-05-15 01:06:47.692204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:118112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.713 [2024-05-15 01:06:47.692234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:53.713 [2024-05-15 01:06:47.692255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:118120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.713 [2024-05-15 01:06:47.692270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:53.713 [2024-05-15 01:06:47.692291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 
nsid:1 lba:118128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.713 [2024-05-15 01:06:47.692312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:53.713 [2024-05-15 01:06:47.692334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:118136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.713 [2024-05-15 01:06:47.692349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:53.713 [2024-05-15 01:06:47.692369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:118144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.713 [2024-05-15 01:06:47.692399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:53.713 [2024-05-15 01:06:47.692422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:118152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.713 [2024-05-15 01:06:47.692438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:53.713 [2024-05-15 01:06:47.692458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:118160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.713 [2024-05-15 01:06:47.692474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:53.713 [2024-05-15 01:06:47.692495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:118168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.713 [2024-05-15 01:06:47.692511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:53.713 [2024-05-15 01:06:47.692531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:118176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.713 [2024-05-15 01:06:47.692547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:53.713 [2024-05-15 01:06:47.692568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.713 [2024-05-15 01:06:47.692584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:53.713 [2024-05-15 01:06:47.692605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:118192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.713 [2024-05-15 01:06:47.692620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:53.713 [2024-05-15 01:06:47.692641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:118200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.713 [2024-05-15 01:06:47.692656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:53.713 [2024-05-15 01:06:47.692677] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:118208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.713 [2024-05-15 01:06:47.692707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:53.713 [2024-05-15 01:06:47.692728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:118216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.713 [2024-05-15 01:06:47.692743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:53.713 [2024-05-15 01:06:47.692763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:118224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.713 [2024-05-15 01:06:47.692781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:53.713 [2024-05-15 01:06:47.692802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:118232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.713 [2024-05-15 01:06:47.692818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:53.713 [2024-05-15 01:06:47.692838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:118240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.713 [2024-05-15 01:06:47.692853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:53.713 [2024-05-15 01:06:47.692873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:118248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.713 [2024-05-15 01:06:47.692888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:53.713 [2024-05-15 01:06:47.692908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:118256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.713 [2024-05-15 01:06:47.692947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:53.713 [2024-05-15 01:06:47.692971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:118264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.713 [2024-05-15 01:06:47.692987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:53.713 [2024-05-15 01:06:47.693009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:118272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.713 [2024-05-15 01:06:47.693025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:53.713 [2024-05-15 01:06:47.693046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:118280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.713 [2024-05-15 01:06:47.693063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 
00:19:53.713 [2024-05-15 01:06:47.693085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:118288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.713 [2024-05-15 01:06:47.693102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:53.713 [2024-05-15 01:06:47.693123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:118296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.713 [2024-05-15 01:06:47.693139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:53.713 [2024-05-15 01:06:47.693160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:118304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.713 [2024-05-15 01:06:47.693176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:53.713 [2024-05-15 01:06:47.693902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:118312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.713 [2024-05-15 01:06:47.693948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:53.713 [2024-05-15 01:06:47.693977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:118320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.713 [2024-05-15 01:06:47.694000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:53.713 [2024-05-15 01:06:47.694029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:118328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.713 [2024-05-15 01:06:47.694047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:53.713 [2024-05-15 01:06:47.694084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:118336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.713 [2024-05-15 01:06:47.694101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:53.713 [2024-05-15 01:06:47.694124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:118344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.713 [2024-05-15 01:06:47.694155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:53.713 [2024-05-15 01:06:47.694179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:118352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.713 [2024-05-15 01:06:47.694195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:53.713 [2024-05-15 01:06:47.694232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:118360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.713 [2024-05-15 01:06:47.694248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:53.713 [2024-05-15 01:06:47.694270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:118368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.713 [2024-05-15 01:06:47.694286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:53.713 [2024-05-15 01:06:47.694308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:118376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.713 [2024-05-15 01:06:47.694324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:53.713 [2024-05-15 01:06:47.694345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:117600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.714 [2024-05-15 01:06:47.694360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:53.714 [2024-05-15 01:06:47.694382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:117608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.714 [2024-05-15 01:06:47.694398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:53.714 [2024-05-15 01:06:47.694420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:117616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.714 [2024-05-15 01:06:47.694450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:53.714 [2024-05-15 01:06:47.694472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:117624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.714 [2024-05-15 01:06:47.694488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:53.714 [2024-05-15 01:06:47.694525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.714 [2024-05-15 01:06:47.694541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:53.714 [2024-05-15 01:06:47.694567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:117640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.714 [2024-05-15 01:06:47.694583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:53.714 [2024-05-15 01:06:47.694604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:117648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.714 [2024-05-15 01:06:47.694620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:53.714 [2024-05-15 01:06:47.694640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:117656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.714 [2024-05-15 01:06:47.694656] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:53.714 [2024-05-15 01:06:47.694676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:117664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.714 [2024-05-15 01:06:47.694692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:53.714 [2024-05-15 01:06:47.694713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:117672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.714 [2024-05-15 01:06:47.694728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:53.714 [2024-05-15 01:06:47.694749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:117680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.714 [2024-05-15 01:06:47.694764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:53.714 [2024-05-15 01:06:47.694785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:117688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.714 [2024-05-15 01:06:47.694815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:53.714 [2024-05-15 01:06:47.694836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:117696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.714 [2024-05-15 01:06:47.694851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:53.714 [2024-05-15 01:06:47.694871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.714 [2024-05-15 01:06:47.694900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:53.714 [2024-05-15 01:06:47.694922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:117712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.714 [2024-05-15 01:06:47.694944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:53.714 [2024-05-15 01:06:47.694984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:117720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.714 [2024-05-15 01:06:47.695000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:53.714 [2024-05-15 01:06:47.695037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:117728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.714 [2024-05-15 01:06:47.695055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:53.714 [2024-05-15 01:06:47.695087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:117736 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.714 [2024-05-15 01:06:47.695105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:53.714 [2024-05-15 01:06:47.695127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:117744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.714 [2024-05-15 01:06:47.695143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:53.714 [2024-05-15 01:06:47.695165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:117752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.714 [2024-05-15 01:06:47.695181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:53.714 [2024-05-15 01:06:47.695203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:117760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.714 [2024-05-15 01:06:47.695219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:53.714 [2024-05-15 01:06:47.695240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:117768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.714 [2024-05-15 01:06:47.695256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:53.714 [2024-05-15 01:06:47.695278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:117776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.714 [2024-05-15 01:06:47.695294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:53.714 [2024-05-15 01:06:47.695316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:117784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.714 [2024-05-15 01:06:47.695332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:53.714 [2024-05-15 01:06:47.695353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:117792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.714 [2024-05-15 01:06:47.695370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:53.714 [2024-05-15 01:06:47.695393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.714 [2024-05-15 01:06:47.695409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:53.714 [2024-05-15 01:06:47.695431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:117808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.714 [2024-05-15 01:06:47.695447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:53.714 [2024-05-15 01:06:47.695468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:117816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.714 [2024-05-15 01:06:47.695484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:53.714 [2024-05-15 01:06:47.695527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:117824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.714 [2024-05-15 01:06:47.695542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:53.714 [2024-05-15 01:06:47.695579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:117832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.714 [2024-05-15 01:06:47.695599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:53.714 [2024-05-15 01:06:47.695620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:117840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.714 [2024-05-15 01:06:47.695651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:53.714 [2024-05-15 01:06:47.695672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:117848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.714 [2024-05-15 01:06:47.695686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:53.714 [2024-05-15 01:06:47.695706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:117856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.714 [2024-05-15 01:06:47.695735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:53.714 [2024-05-15 01:06:47.695758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:117360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.714 [2024-05-15 01:06:47.695773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:53.714 [2024-05-15 01:06:47.695794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:117368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.714 [2024-05-15 01:06:47.695809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:53.714 [2024-05-15 01:06:47.695830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:117376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.714 [2024-05-15 01:06:47.695845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:53.715 [2024-05-15 01:06:47.695866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.715 [2024-05-15 01:06:47.695881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 
dnr:0 00:19:53.715 [2024-05-15 01:06:47.695902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:117392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.715 [2024-05-15 01:06:47.695938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:53.715 [2024-05-15 01:06:47.695964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:117400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.715 [2024-05-15 01:06:47.695980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:53.715 [2024-05-15 01:06:47.696001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:117408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.715 [2024-05-15 01:06:47.696032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:53.715 [2024-05-15 01:06:47.696053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:117864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.715 [2024-05-15 01:06:47.696069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:53.715 [2024-05-15 01:06:47.696090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:117416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.715 [2024-05-15 01:06:47.696109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:53.715 [2024-05-15 01:06:47.696148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:117424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.715 [2024-05-15 01:06:47.696164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:53.715 [2024-05-15 01:06:47.696185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:117432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.715 [2024-05-15 01:06:47.696201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:53.715 [2024-05-15 01:06:47.696223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:117440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.715 [2024-05-15 01:06:47.696238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:53.715 [2024-05-15 01:06:47.696273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:117448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.715 [2024-05-15 01:06:47.696289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:53.715 [2024-05-15 01:06:47.696310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:117456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.715 [2024-05-15 01:06:47.696326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:53.715 [2024-05-15 01:06:47.696346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.715 [2024-05-15 01:06:47.696361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:53.715 [2024-05-15 01:06:47.696382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:117472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.715 [2024-05-15 01:06:47.696398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:53.715 [2024-05-15 01:06:47.696433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.715 [2024-05-15 01:06:47.696449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:53.715 [2024-05-15 01:06:47.696469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:117488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.715 [2024-05-15 01:06:47.696484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:53.715 [2024-05-15 01:06:47.696504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:117496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.715 [2024-05-15 01:06:47.696533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:53.715 [2024-05-15 01:06:47.696555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:117504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.715 [2024-05-15 01:06:47.696571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:53.715 [2024-05-15 01:06:47.696591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:117512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.715 [2024-05-15 01:06:47.696610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:53.715 [2024-05-15 01:06:47.696632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:117520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.715 [2024-05-15 01:06:47.696647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:53.715 [2024-05-15 01:06:47.696668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:117528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.715 [2024-05-15 01:06:47.696684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:53.715 [2024-05-15 01:06:47.696705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:117536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.715 [2024-05-15 
01:06:47.696720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:53.715 [2024-05-15 01:06:47.696741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:117544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.715 [2024-05-15 01:06:47.696756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:53.715 [2024-05-15 01:06:47.696776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:117552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.715 [2024-05-15 01:06:47.696791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:53.715 [2024-05-15 01:06:47.696826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:117560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.715 [2024-05-15 01:06:47.696842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:53.715 [2024-05-15 01:06:47.696863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:117568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.715 [2024-05-15 01:06:47.696877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:53.715 [2024-05-15 01:06:47.696914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:117576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.715 [2024-05-15 01:06:47.696938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:53.715 [2024-05-15 01:06:47.697676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:117584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.715 [2024-05-15 01:06:47.697713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:53.715 [2024-05-15 01:06:47.697744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:117592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.715 [2024-05-15 01:06:47.697762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:53.715 [2024-05-15 01:06:47.697783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:117872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.715 [2024-05-15 01:06:47.697799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:53.715 [2024-05-15 01:06:47.697820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:117880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.715 [2024-05-15 01:06:47.697835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:53.715 [2024-05-15 01:06:47.697861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:117888 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.715 [2024-05-15 01:06:47.697877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:53.715 [2024-05-15 01:06:47.697898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:117896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.715 [2024-05-15 01:06:47.697935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:53.715 [2024-05-15 01:06:47.697962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:117904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.715 [2024-05-15 01:06:47.697993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:53.715 [2024-05-15 01:06:47.698015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:117912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.715 [2024-05-15 01:06:47.698031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:53.716 [2024-05-15 01:06:47.698051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:117920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.716 [2024-05-15 01:06:47.698066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:53.716 [2024-05-15 01:06:47.698087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:117928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.716 [2024-05-15 01:06:47.698102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:53.716 [2024-05-15 01:06:47.698124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:117936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.716 [2024-05-15 01:06:47.698139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:53.716 [2024-05-15 01:06:47.698160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:117944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.716 [2024-05-15 01:06:47.698175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:53.716 [2024-05-15 01:06:47.698196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:117952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.716 [2024-05-15 01:06:47.698211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:53.716 [2024-05-15 01:06:47.698232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:117960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.716 [2024-05-15 01:06:47.698247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:53.716 [2024-05-15 01:06:47.698268] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:117968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.716 [2024-05-15 01:06:47.698283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:53.716 [2024-05-15 01:06:47.698319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:117976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.716 [2024-05-15 01:06:47.698334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:53.716 [2024-05-15 01:06:47.698358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:117984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.716 [2024-05-15 01:06:47.698374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.716 [2024-05-15 01:06:47.698394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:117992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.716 [2024-05-15 01:06:47.698408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:53.716 [2024-05-15 01:06:47.698429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:118000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.716 [2024-05-15 01:06:47.698444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:53.716 [2024-05-15 01:06:47.698465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:118008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.716 [2024-05-15 01:06:47.698479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:53.716 [2024-05-15 01:06:47.698499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:118016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.716 [2024-05-15 01:06:47.698514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:53.716 [2024-05-15 01:06:47.698534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:118024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.716 [2024-05-15 01:06:47.698549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:53.716 [2024-05-15 01:06:47.698570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:118032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.716 [2024-05-15 01:06:47.698584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:53.716 [2024-05-15 01:06:47.698604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:118040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.716 [2024-05-15 01:06:47.698619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 
00:19:53.716 [2024-05-15 01:06:47.698639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:118048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.716 [2024-05-15 01:06:47.698654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:53.716 [2024-05-15 01:06:47.698674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:118056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.716 [2024-05-15 01:06:47.698689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:53.716 [2024-05-15 01:06:47.698709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:118064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.716 [2024-05-15 01:06:47.698723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:53.716 [2024-05-15 01:06:47.698743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.716 [2024-05-15 01:06:47.698758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:53.716 [2024-05-15 01:06:47.698778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:118080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.716 [2024-05-15 01:06:47.698796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:53.716 [2024-05-15 01:06:47.698817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:118088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.716 [2024-05-15 01:06:47.698832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:53.716 [2024-05-15 01:06:47.698852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:118096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.716 [2024-05-15 01:06:47.698866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:53.716 [2024-05-15 01:06:47.698886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:118104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.716 [2024-05-15 01:06:47.698901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:53.716 [2024-05-15 01:06:47.698921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:118112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.716 [2024-05-15 01:06:47.698958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:53.716 [2024-05-15 01:06:47.698981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:118120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.716 [2024-05-15 01:06:47.698996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:53.716 [2024-05-15 01:06:47.699018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:118128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.716 [2024-05-15 01:06:47.699033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:53.716 [2024-05-15 01:06:47.699077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:118136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.716 [2024-05-15 01:06:47.699094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:53.716 [2024-05-15 01:06:47.699116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:118144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.716 [2024-05-15 01:06:47.699131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:53.716 [2024-05-15 01:06:47.699152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:118152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.716 [2024-05-15 01:06:47.699168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:53.716 [2024-05-15 01:06:47.699189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:118160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.716 [2024-05-15 01:06:47.699205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:53.716 [2024-05-15 01:06:47.699226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:118168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.716 [2024-05-15 01:06:47.699258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:53.716 [2024-05-15 01:06:47.699279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:118176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.716 [2024-05-15 01:06:47.699298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:53.716 [2024-05-15 01:06:47.699319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:118184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.716 [2024-05-15 01:06:47.699334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:53.717 [2024-05-15 01:06:47.699370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:118192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.717 [2024-05-15 01:06:47.699385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:53.717 [2024-05-15 01:06:47.699405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:118200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.717 [2024-05-15 01:06:47.699420] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:53.717 [2024-05-15 01:06:47.699440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:118208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.717 [2024-05-15 01:06:47.699454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:53.717 [2024-05-15 01:06:47.699474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:118216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.717 [2024-05-15 01:06:47.699488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:53.717 [2024-05-15 01:06:47.699508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:118224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.717 [2024-05-15 01:06:47.699522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:53.717 [2024-05-15 01:06:47.699543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:118232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.717 [2024-05-15 01:06:47.699558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:53.717 [2024-05-15 01:06:47.699578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:118240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.717 [2024-05-15 01:06:47.699592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:53.717 [2024-05-15 01:06:47.699612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:118248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.717 [2024-05-15 01:06:47.699627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:53.717 [2024-05-15 01:06:47.699647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:118256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.717 [2024-05-15 01:06:47.699661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:53.717 [2024-05-15 01:06:47.699686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:118264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.717 [2024-05-15 01:06:47.699702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:53.717 [2024-05-15 01:06:47.699722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:118272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.717 [2024-05-15 01:06:47.699740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:53.717 [2024-05-15 01:06:47.699760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:118280 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:53.717 [2024-05-15 01:06:47.699776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:53.717 [2024-05-15 01:06:47.699796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:118288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.717 [2024-05-15 01:06:47.699811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:53.717 [2024-05-15 01:06:47.699831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:118296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.717 [2024-05-15 01:06:47.699846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:53.717 [2024-05-15 01:06:47.700484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:118304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.717 [2024-05-15 01:06:47.700507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:53.717 [2024-05-15 01:06:47.700552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:118312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.717 [2024-05-15 01:06:47.700570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:53.717 [2024-05-15 01:06:47.700607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:118320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.717 [2024-05-15 01:06:47.700624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:53.717 [2024-05-15 01:06:47.700645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:118328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.717 [2024-05-15 01:06:47.700662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:53.717 [2024-05-15 01:06:47.700685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:118336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.717 [2024-05-15 01:06:47.700700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:53.717 [2024-05-15 01:06:47.700722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:118344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.717 [2024-05-15 01:06:47.700738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:53.717 [2024-05-15 01:06:47.700759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:118352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.717 [2024-05-15 01:06:47.700774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:53.717 [2024-05-15 01:06:47.700795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:69 nsid:1 lba:118360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.717 [2024-05-15 01:06:47.700811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:53.717 [2024-05-15 01:06:47.700832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:118368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.717 [2024-05-15 01:06:47.700848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:53.717 [2024-05-15 01:06:47.700874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:118376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.717 [2024-05-15 01:06:47.700891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:53.717 [2024-05-15 01:06:47.700936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:117600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.717 [2024-05-15 01:06:47.700956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:53.717 [2024-05-15 01:06:47.700981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:117608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.717 [2024-05-15 01:06:47.700998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:53.717 [2024-05-15 01:06:47.701022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:117616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.717 [2024-05-15 01:06:47.701038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:53.717 [2024-05-15 01:06:47.701061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:117624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.717 [2024-05-15 01:06:47.701077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:53.717 [2024-05-15 01:06:47.701099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:117632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.717 [2024-05-15 01:06:47.701116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:53.718 [2024-05-15 01:06:47.701138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:117640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.718 [2024-05-15 01:06:47.701154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:53.718 [2024-05-15 01:06:47.701176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:117648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.718 [2024-05-15 01:06:47.701193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:53.718 [2024-05-15 
01:06:47.701230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:117656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.718 [2024-05-15 01:06:47.701246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:53.718 [2024-05-15 01:06:47.701267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:117664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.718 [2024-05-15 01:06:47.701282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:53.718 [2024-05-15 01:06:47.701318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:117672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.718 [2024-05-15 01:06:47.701333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:53.718 [2024-05-15 01:06:47.701354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:117680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.718 [2024-05-15 01:06:47.701369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:53.718 [2024-05-15 01:06:47.701393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:117688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.718 [2024-05-15 01:06:47.701409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:53.718 [2024-05-15 01:06:47.701429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:117696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.718 [2024-05-15 01:06:47.701458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:53.718 [2024-05-15 01:06:47.701480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:117704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.718 [2024-05-15 01:06:47.701495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:53.718 [2024-05-15 01:06:47.701516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:117712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.718 [2024-05-15 01:06:47.701532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:53.718 [2024-05-15 01:06:47.701552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:117720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.718 [2024-05-15 01:06:47.701567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:53.718 [2024-05-15 01:06:47.701588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:117728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.718 [2024-05-15 01:06:47.701603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:63 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:53.718 [2024-05-15 01:06:47.701624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:117736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.718 [2024-05-15 01:06:47.701639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:53.718 [2024-05-15 01:06:47.701660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:117744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.718 [2024-05-15 01:06:47.701676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:53.718 [2024-05-15 01:06:47.701696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.718 [2024-05-15 01:06:47.701711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:53.718 [2024-05-15 01:06:47.701732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:117760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.718 [2024-05-15 01:06:47.701762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:53.718 [2024-05-15 01:06:47.701783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:117768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.718 [2024-05-15 01:06:47.701798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:53.718 [2024-05-15 01:06:47.701832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.718 [2024-05-15 01:06:47.701848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:53.718 [2024-05-15 01:06:47.701870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:117784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.718 [2024-05-15 01:06:47.701891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:53.718 [2024-05-15 01:06:47.701928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:117792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.718 [2024-05-15 01:06:47.701953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:53.718 [2024-05-15 01:06:47.701976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:117800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.718 [2024-05-15 01:06:47.701993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:53.718 [2024-05-15 01:06:47.702014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.718 [2024-05-15 01:06:47.702030] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:53.718 [2024-05-15 01:06:47.702052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:117816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.718 [2024-05-15 01:06:47.702068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:53.718 [2024-05-15 01:06:47.702089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:117824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.718 [2024-05-15 01:06:47.702105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:53.718 [2024-05-15 01:06:47.702141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:117832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.718 [2024-05-15 01:06:47.702156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:53.718 [2024-05-15 01:06:47.702178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:117840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.718 [2024-05-15 01:06:47.702193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:53.718 [2024-05-15 01:06:47.702229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:117848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.718 [2024-05-15 01:06:47.702245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:53.718 [2024-05-15 01:06:47.702265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:117856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.718 [2024-05-15 01:06:47.702295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:53.718 [2024-05-15 01:06:47.702317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:117360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.718 [2024-05-15 01:06:47.702333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:53.718 [2024-05-15 01:06:47.702355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:117368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.718 [2024-05-15 01:06:47.702370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:53.718 [2024-05-15 01:06:47.702391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:117376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.718 [2024-05-15 01:06:47.702410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:53.718 [2024-05-15 01:06:47.702432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:53.718 [2024-05-15 01:06:47.702447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:53.718 [2024-05-15 01:06:47.702468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:117392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.718 [2024-05-15 01:06:47.702483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:53.719 [2024-05-15 01:06:47.702504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.719 [2024-05-15 01:06:47.702519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:53.719 [2024-05-15 01:06:47.708160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:117408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.719 [2024-05-15 01:06:47.708190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:53.719 [2024-05-15 01:06:47.708214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:117864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.719 [2024-05-15 01:06:47.708252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:53.719 [2024-05-15 01:06:47.708274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:117416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.719 [2024-05-15 01:06:47.708289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:53.719 [2024-05-15 01:06:47.708309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:117424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.719 [2024-05-15 01:06:47.708324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:53.719 [2024-05-15 01:06:47.708344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:117432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.719 [2024-05-15 01:06:47.708359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:53.719 [2024-05-15 01:06:47.708379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:117440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.719 [2024-05-15 01:06:47.708394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:53.719 [2024-05-15 01:06:47.708413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:117448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.719 [2024-05-15 01:06:47.708428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:53.719 [2024-05-15 01:06:47.708448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:73 nsid:1 lba:117456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.719 [2024-05-15 01:06:47.708463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:53.719 [2024-05-15 01:06:47.708483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.719 [2024-05-15 01:06:47.708504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:53.719 [2024-05-15 01:06:47.708526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:117472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.719 [2024-05-15 01:06:47.708542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:53.719 [2024-05-15 01:06:47.708563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.719 [2024-05-15 01:06:47.708578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:53.719 [2024-05-15 01:06:47.708597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:117488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.719 [2024-05-15 01:06:47.708612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:53.719 [2024-05-15 01:06:47.708632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:117496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.719 [2024-05-15 01:06:47.708647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:53.719 [2024-05-15 01:06:47.708667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:117504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.719 [2024-05-15 01:06:47.708681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:53.719 [2024-05-15 01:06:47.708702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:117512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.719 [2024-05-15 01:06:47.708716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:53.719 [2024-05-15 01:06:47.708737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:117520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.719 [2024-05-15 01:06:47.708751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:53.719 [2024-05-15 01:06:47.708771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:117528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.719 [2024-05-15 01:06:47.708786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:53.719 [2024-05-15 
01:06:47.708806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:117536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.719 [2024-05-15 01:06:47.708821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:53.719 [2024-05-15 01:06:47.708841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:117544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.719 [2024-05-15 01:06:47.708856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:53.719 [2024-05-15 01:06:47.708876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:117552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.719 [2024-05-15 01:06:47.708890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:53.719 [2024-05-15 01:06:47.708925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:117560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.719 [2024-05-15 01:06:47.708952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:53.719 [2024-05-15 01:06:47.708996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:117568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.719 [2024-05-15 01:06:47.709014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:53.719 [2024-05-15 01:06:47.709309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:117576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.719 [2024-05-15 01:06:47.709333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:53.719 [2024-05-15 01:06:47.709394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:117584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.719 [2024-05-15 01:06:47.709430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:53.719 [2024-05-15 01:06:47.709458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.719 [2024-05-15 01:06:47.709474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:53.719 [2024-05-15 01:06:47.709515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:117872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.719 [2024-05-15 01:06:47.709532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:53.719 [2024-05-15 01:06:47.709560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:117880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.719 [2024-05-15 01:06:47.709592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:82 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:53.719 [2024-05-15 01:06:47.709618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.719 [2024-05-15 01:06:47.709634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:53.719 [2024-05-15 01:06:47.709659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:117896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.719 [2024-05-15 01:06:47.709675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:53.719 [2024-05-15 01:06:47.709700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:117904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.719 [2024-05-15 01:06:47.709716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:53.719 [2024-05-15 01:06:47.709741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:117912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.719 [2024-05-15 01:06:47.709758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:53.719 [2024-05-15 01:06:47.709783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:117920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.719 [2024-05-15 01:06:47.709812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:53.719 [2024-05-15 01:06:47.709837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:117928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.719 [2024-05-15 01:06:47.709852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:53.719 [2024-05-15 01:06:47.709882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:117936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.720 [2024-05-15 01:06:47.709898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:53.720 [2024-05-15 01:06:47.709946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:117944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.720 [2024-05-15 01:06:47.709965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:53.720 [2024-05-15 01:06:47.709991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:117952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.720 [2024-05-15 01:06:47.710007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:53.720 [2024-05-15 01:06:47.710032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:117960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.720 [2024-05-15 01:06:47.710048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:53.720 [2024-05-15 01:06:47.710073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:117968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.720 [2024-05-15 01:06:47.710088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:53.720 [2024-05-15 01:06:47.710114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.720 [2024-05-15 01:06:47.710129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:53.720 [2024-05-15 01:06:47.710154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:117984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.720 [2024-05-15 01:06:47.710170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.720 [2024-05-15 01:06:47.710195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:117992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.720 [2024-05-15 01:06:47.710211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:53.720 [2024-05-15 01:06:47.710250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:118000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.720 [2024-05-15 01:06:47.710267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:53.720 [2024-05-15 01:06:47.710292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:118008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.720 [2024-05-15 01:06:47.710308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:53.720 [2024-05-15 01:06:47.710332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:118016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.720 [2024-05-15 01:06:47.710348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:53.720 [2024-05-15 01:06:47.710372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:118024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.720 [2024-05-15 01:06:47.710388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:53.720 [2024-05-15 01:06:47.710412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:118032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.720 [2024-05-15 01:06:47.710431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:53.720 [2024-05-15 01:06:47.710456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:118040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:53.720 [2024-05-15 01:06:47.710471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:53.720 [2024-05-15 01:06:47.710496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:118048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.720 [2024-05-15 01:06:47.710511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:53.720 [2024-05-15 01:06:47.710535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:118056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.720 [2024-05-15 01:06:47.710550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:53.720 [2024-05-15 01:06:47.710575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:118064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.720 [2024-05-15 01:06:47.710590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:53.720 [2024-05-15 01:06:47.710614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.720 [2024-05-15 01:06:47.710630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:53.720 [2024-05-15 01:06:47.710654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:118080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.720 [2024-05-15 01:06:47.710669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:53.720 [2024-05-15 01:06:47.710693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:118088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.720 [2024-05-15 01:06:47.710709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:53.720 [2024-05-15 01:06:47.710734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:118096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.720 [2024-05-15 01:06:47.710749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:53.720 [2024-05-15 01:06:47.710774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:118104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.720 [2024-05-15 01:06:47.710789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:53.720 [2024-05-15 01:06:47.710813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:118112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.720 [2024-05-15 01:06:47.710828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:53.720 [2024-05-15 01:06:47.710852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:118120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.720 [2024-05-15 01:06:47.710868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:53.720 [2024-05-15 01:06:47.710892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:118128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.720 [2024-05-15 01:06:47.710926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:53.720 [2024-05-15 01:06:47.710965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:118136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.720 [2024-05-15 01:06:47.710982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:53.720 [2024-05-15 01:06:47.711008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:118144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.720 [2024-05-15 01:06:47.711024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:53.720 [2024-05-15 01:06:47.711049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:118152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.720 [2024-05-15 01:06:47.711065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:53.720 [2024-05-15 01:06:47.711090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:118160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.720 [2024-05-15 01:06:47.711106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:53.720 [2024-05-15 01:06:47.711131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.720 [2024-05-15 01:06:47.711146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:53.720 [2024-05-15 01:06:47.711171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:118176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.720 [2024-05-15 01:06:47.711187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:53.720 [2024-05-15 01:06:47.711227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:118184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.720 [2024-05-15 01:06:47.711243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:53.720 [2024-05-15 01:06:47.711267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:118192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.720 [2024-05-15 01:06:47.711283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:53.720 [2024-05-15 01:06:47.711307] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:118200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.720 [2024-05-15 01:06:47.711322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:53.720 [2024-05-15 01:06:47.711346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:118208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.720 [2024-05-15 01:06:47.711361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:53.720 [2024-05-15 01:06:47.711386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:118216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.721 [2024-05-15 01:06:47.711402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:53.721 [2024-05-15 01:06:47.711426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:118224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.721 [2024-05-15 01:06:47.711441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:53.721 [2024-05-15 01:06:47.711470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:118232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.721 [2024-05-15 01:06:47.711485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:53.721 [2024-05-15 01:06:47.711510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:118240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.721 [2024-05-15 01:06:47.711525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:53.721 [2024-05-15 01:06:47.711549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:118248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.721 [2024-05-15 01:06:47.711565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:53.721 [2024-05-15 01:06:47.711589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:118256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.721 [2024-05-15 01:06:47.711604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:53.721 [2024-05-15 01:06:47.711629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:118264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.721 [2024-05-15 01:06:47.711644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:53.721 [2024-05-15 01:06:47.711668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:118272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.721 [2024-05-15 01:06:47.711684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 
00:19:53.721 [2024-05-15 01:06:47.711708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:118280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.721 [2024-05-15 01:06:47.711723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:53.721 [2024-05-15 01:06:47.711748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:118288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.721 [2024-05-15 01:06:47.711763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:53.721 [2024-05-15 01:06:47.711902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:118296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.721 [2024-05-15 01:06:47.711944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:53.721 [2024-05-15 01:07:03.122055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:30200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.721 [2024-05-15 01:07:03.122114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:53.721 [2024-05-15 01:07:03.122561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:30416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.721 [2024-05-15 01:07:03.122587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:53.721 [2024-05-15 01:07:03.122615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:30432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.721 [2024-05-15 01:07:03.122633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:53.721 [2024-05-15 01:07:03.122667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:30448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.721 [2024-05-15 01:07:03.122685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:53.721 [2024-05-15 01:07:03.122707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.721 [2024-05-15 01:07:03.122724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:53.721 [2024-05-15 01:07:03.122745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:30480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.721 [2024-05-15 01:07:03.122762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:53.721 [2024-05-15 01:07:03.122784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.721 [2024-05-15 01:07:03.122801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:53.721 [2024-05-15 01:07:03.122823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.721 [2024-05-15 01:07:03.122840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:53.721 [2024-05-15 01:07:03.122862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:29648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.721 [2024-05-15 01:07:03.122894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:53.721 [2024-05-15 01:07:03.122917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:29680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.721 [2024-05-15 01:07:03.122940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:53.721 [2024-05-15 01:07:03.122981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:29704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.721 [2024-05-15 01:07:03.122999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:53.721 [2024-05-15 01:07:03.123347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:30512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.721 [2024-05-15 01:07:03.123375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:53.721 [2024-05-15 01:07:03.123405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:30528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.721 [2024-05-15 01:07:03.123424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:53.721 [2024-05-15 01:07:03.123447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:30544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.721 [2024-05-15 01:07:03.123477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:53.721 [2024-05-15 01:07:03.123499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:30560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.721 [2024-05-15 01:07:03.123515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:53.721 [2024-05-15 01:07:03.123541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:29752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.721 [2024-05-15 01:07:03.123557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:53.721 [2024-05-15 01:07:03.123578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.721 [2024-05-15 01:07:03.123594] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:53.721 [2024-05-15 01:07:03.123614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.721 [2024-05-15 01:07:03.123630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:53.721 [2024-05-15 01:07:03.123665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:29840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.721 [2024-05-15 01:07:03.123681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:53.721 [2024-05-15 01:07:03.123701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:30224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.721 [2024-05-15 01:07:03.123716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:53.721 [2024-05-15 01:07:03.123736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:30256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.721 [2024-05-15 01:07:03.123751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:53.721 [2024-05-15 01:07:03.123771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:30288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.721 [2024-05-15 01:07:03.123786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:53.721 [2024-05-15 01:07:03.123806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:30320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.721 [2024-05-15 01:07:03.123822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:53.721 Received shutdown signal, test time was about 32.285806 seconds 00:19:53.721 00:19:53.721 Latency(us) 00:19:53.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.721 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:53.721 Verification LBA range: start 0x0 length 0x4000 00:19:53.721 Nvme0n1 : 32.28 6707.42 26.20 0.00 0.00 19025.29 582.54 4125952.38 00:19:53.721 =================================================================================================================== 00:19:53.722 Total : 6707.42 26.20 0.00 0.00 19025.29 582.54 4125952.38 00:19:53.722 01:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:53.980 01:07:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:19:53.980 01:07:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:53.980 01:07:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:19:53.980 01:07:06 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:53.980 01:07:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:19:53.980 01:07:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:53.980 01:07:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:19:53.980 01:07:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:53.980 01:07:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:53.980 rmmod nvme_tcp 00:19:53.980 rmmod nvme_fabrics 00:19:53.980 rmmod nvme_keyring 00:19:53.980 01:07:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:53.980 01:07:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:19:53.981 01:07:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:19:53.981 01:07:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1311236 ']' 00:19:53.981 01:07:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1311236 00:19:53.981 01:07:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 1311236 ']' 00:19:53.981 01:07:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 1311236 00:19:53.981 01:07:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:19:53.981 01:07:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:53.981 01:07:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1311236 00:19:53.981 01:07:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:53.981 01:07:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:53.981 01:07:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1311236' 00:19:53.981 killing process with pid 1311236 00:19:53.981 01:07:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 1311236 00:19:53.981 [2024-05-15 01:07:06.286063] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:53.981 01:07:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 1311236 00:19:54.239 01:07:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:54.239 01:07:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:54.239 01:07:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:54.239 01:07:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:54.239 01:07:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:54.239 01:07:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.239 01:07:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:54.239 01:07:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.769 01:07:08 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:56.769 00:19:56.769 real 0m42.288s 00:19:56.769 user 1m56.600s 00:19:56.769 sys 0m14.652s 00:19:56.769 01:07:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:56.769 01:07:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:56.769 ************************************ 00:19:56.769 END TEST nvmf_host_multipath_status 00:19:56.769 ************************************ 00:19:56.769 01:07:08 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:56.769 01:07:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:56.769 01:07:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:56.769 01:07:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:56.769 ************************************ 00:19:56.769 START TEST nvmf_discovery_remove_ifc 00:19:56.769 ************************************ 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:56.769 * Looking for test storage... 00:19:56.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:56.769 01:07:08 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:19:56.769 01:07:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:59.299 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:59.299 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:19:59.299 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:59.299 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:59.299 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:59.299 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:59.299 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:59.299 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:19:59.299 01:07:11 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:59.299 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:19:59.299 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:19:59.299 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:19:59.299 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:19:59.299 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:19:59.299 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:19:59.299 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:59.299 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:59.299 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:59.299 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:59.299 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:59.299 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:59.299 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:59.300 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:59.300 01:07:11 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:59.300 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:59.300 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:59.300 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc 
-- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:59.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:59.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:19:59.300 00:19:59.300 --- 10.0.0.2 ping statistics --- 00:19:59.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.300 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:59.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:59.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:19:59.300 00:19:59.300 --- 10.0.0.1 ping statistics --- 00:19:59.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.300 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1318146 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1318146 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 1318146 ']' 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:59.300 [2024-05-15 01:07:11.316650] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:19:59.300 [2024-05-15 01:07:11.316722] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.300 EAL: No free 2048 kB hugepages reported on node 1 00:19:59.300 [2024-05-15 01:07:11.390490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.300 [2024-05-15 01:07:11.496098] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.300 [2024-05-15 01:07:11.496150] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.300 [2024-05-15 01:07:11.496175] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.300 [2024-05-15 01:07:11.496186] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:59.300 [2024-05-15 01:07:11.496196] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:59.300 [2024-05-15 01:07:11.496246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:19:59.300 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:59.301 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:59.301 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:59.301 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.301 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:19:59.301 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.301 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:59.301 [2024-05-15 01:07:11.654442] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:59.301 [2024-05-15 01:07:11.662393] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:59.301 [2024-05-15 01:07:11.662652] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:19:59.301 null0 00:19:59.558 [2024-05-15 01:07:11.694558] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:59.558 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.558 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1318175 00:19:59.558 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1318175 /tmp/host.sock 00:19:59.558 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:19:59.558 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 1318175 ']' 00:19:59.558 01:07:11 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:19:59.558 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:59.558 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:59.558 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:59.558 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:59.558 01:07:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:59.558 [2024-05-15 01:07:11.763545] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:19:59.558 [2024-05-15 01:07:11.763635] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1318175 ] 00:19:59.558 EAL: No free 2048 kB hugepages reported on node 1 00:19:59.558 [2024-05-15 01:07:11.837237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.816 [2024-05-15 01:07:11.954692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.381 01:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:00.381 01:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:20:00.381 01:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:00.381 01:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:20:00.381 01:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.381 01:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:00.381 01:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.381 01:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:20:00.381 01:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.381 01:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:00.639 01:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.639 01:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:20:00.639 01:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.639 01:07:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:01.573 [2024-05-15 01:07:13.834977] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:01.573 [2024-05-15 01:07:13.835022] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:01.573 [2024-05-15 
01:07:13.835045] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:01.573 [2024-05-15 01:07:13.922332] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:20:01.831 [2024-05-15 01:07:13.985423] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:01.831 [2024-05-15 01:07:13.985495] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:01.831 [2024-05-15 01:07:13.985537] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:01.831 [2024-05-15 01:07:13.985563] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:20:01.831 [2024-05-15 01:07:13.985599] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:01.831 01:07:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.831 01:07:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:20:01.831 01:07:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:01.831 01:07:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:01.831 01:07:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.831 01:07:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:01.831 01:07:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:01.831 01:07:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:01.831 01:07:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:01.831 [2024-05-15 01:07:13.992622] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1803010 was disconnected and freed. delete nvme_qpair. 
00:20:01.831 01:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.831 01:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:20:01.831 01:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:20:01.831 01:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:20:01.831 01:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:20:01.831 01:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:01.831 01:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:01.831 01:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:01.831 01:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.831 01:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:01.831 01:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:01.831 01:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:01.831 01:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.831 01:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:01.831 01:07:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:02.774 01:07:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:02.774 01:07:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:02.774 01:07:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:02.774 01:07:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.774 01:07:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:02.774 01:07:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:02.774 01:07:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:02.774 01:07:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.033 01:07:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:03.033 01:07:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:03.967 01:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:03.967 01:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:03.967 01:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:03.967 01:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.967 01:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:03.967 01:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:20:03.967 01:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:03.967 01:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.967 01:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:03.967 01:07:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:04.901 01:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:04.901 01:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:04.901 01:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:04.901 01:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.901 01:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:04.901 01:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:04.901 01:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:04.901 01:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.901 01:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:04.901 01:07:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:06.274 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:06.275 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:06.275 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.275 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:06.275 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:06.275 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:06.275 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:06.275 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.275 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:06.275 01:07:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:07.208 01:07:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:07.208 01:07:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:07.208 01:07:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:07.208 01:07:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.208 01:07:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:07.208 01:07:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:07.208 01:07:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:07.208 01:07:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:20:07.208 01:07:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:07.208 01:07:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:07.208 [2024-05-15 01:07:19.425991] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:20:07.208 [2024-05-15 01:07:19.426050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.208 [2024-05-15 01:07:19.426070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.208 [2024-05-15 01:07:19.426087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.208 [2024-05-15 01:07:19.426099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.208 [2024-05-15 01:07:19.426112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.208 [2024-05-15 01:07:19.426125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.208 [2024-05-15 01:07:19.426144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.208 [2024-05-15 01:07:19.426157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.208 [2024-05-15 01:07:19.426170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.209 [2024-05-15 01:07:19.426182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.209 [2024-05-15 01:07:19.426195] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ca380 is same with the state(5) to be set 00:20:07.209 [2024-05-15 01:07:19.436009] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ca380 (9): Bad file descriptor 00:20:07.209 [2024-05-15 01:07:19.446067] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.142 01:07:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:08.142 01:07:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:08.142 01:07:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:08.142 01:07:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.142 01:07:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:08.142 01:07:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:08.142 01:07:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:08.142 [2024-05-15 01:07:20.461036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:20:09.515 [2024-05-15 
01:07:21.484966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:20:09.515 [2024-05-15 01:07:21.485031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ca380 with addr=10.0.0.2, port=4420 00:20:09.515 [2024-05-15 01:07:21.485060] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ca380 is same with the state(5) to be set 00:20:09.515 [2024-05-15 01:07:21.485583] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ca380 (9): Bad file descriptor 00:20:09.515 [2024-05-15 01:07:21.485631] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.515 [2024-05-15 01:07:21.485672] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:20:09.515 [2024-05-15 01:07:21.485714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.515 [2024-05-15 01:07:21.485737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.515 [2024-05-15 01:07:21.485757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.515 [2024-05-15 01:07:21.485772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.515 [2024-05-15 01:07:21.485788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.515 [2024-05-15 01:07:21.485802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.515 [2024-05-15 01:07:21.485817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.515 [2024-05-15 01:07:21.485833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.515 [2024-05-15 01:07:21.485849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.515 [2024-05-15 01:07:21.485873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.515 [2024-05-15 01:07:21.485888] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:20:09.515 [2024-05-15 01:07:21.486103] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c9810 (9): Bad file descriptor 00:20:09.515 [2024-05-15 01:07:21.487121] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:20:09.515 [2024-05-15 01:07:21.487144] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:20:09.515 01:07:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.515 01:07:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:09.515 01:07:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:10.450 01:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:10.450 01:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:10.450 01:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:10.450 01:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.450 01:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:10.450 01:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:10.450 01:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:10.450 01:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.450 01:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:20:10.450 01:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:10.450 01:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:10.450 01:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:20:10.450 01:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:10.450 01:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:10.450 01:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:10.450 01:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.450 01:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:10.450 01:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:10.450 01:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:10.450 01:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.450 01:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:10.450 01:07:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:11.383 [2024-05-15 01:07:23.541046] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:11.383 [2024-05-15 01:07:23.541073] 
bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:11.383 [2024-05-15 01:07:23.541095] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:11.383 01:07:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:11.383 01:07:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:11.383 01:07:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.383 01:07:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:11.383 01:07:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:11.383 01:07:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:11.383 01:07:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:11.383 01:07:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.383 [2024-05-15 01:07:23.667566] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:20:11.383 01:07:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:11.383 01:07:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:11.641 [2024-05-15 01:07:23.850069] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:11.641 [2024-05-15 01:07:23.850120] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:11.641 [2024-05-15 01:07:23.850151] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:11.641 [2024-05-15 01:07:23.850173] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:20:11.641 [2024-05-15 01:07:23.850185] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:11.641 [2024-05-15 01:07:23.859028] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x180d6f0 was disconnected and freed. delete nvme_qpair. 
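(The xtrace above keeps repeating the same one-second wait loop from host/discovery_remove_ifc.sh lines @29-@34; a rough reconstruction of that helper follows, assuming the usual SPDK rpc_cmd wrapper -- the function bodies are inferred from the trace, not copied from the script:)

    # Reconstructed from the repeated @29-@34 xtrace lines above (assumption, not the script verbatim)
    get_bdev_list() {
        # list the bdev names the host app reports over its RPC socket
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local bdev_name=$1
        # poll once per second until the expected bdev (here nvme1n1) shows up again
        while [[ "$(get_bdev_list)" != "$bdev_name" ]]; do
            sleep 1
        done
    }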
00:20:12.575 01:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:12.575 01:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:12.575 01:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:12.575 01:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.575 01:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:12.575 01:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:12.575 01:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:12.575 01:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.575 01:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:20:12.575 01:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:20:12.575 01:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1318175 00:20:12.575 01:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 1318175 ']' 00:20:12.575 01:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 1318175 00:20:12.575 01:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:20:12.575 01:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:12.575 01:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1318175 00:20:12.575 01:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:12.575 01:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:12.575 01:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1318175' 00:20:12.575 killing process with pid 1318175 00:20:12.575 01:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 1318175 00:20:12.575 01:07:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 1318175 00:20:12.832 01:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:20:12.833 01:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:12.833 01:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:20:12.833 01:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:12.833 01:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:20:12.833 01:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:12.833 01:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:12.833 rmmod nvme_tcp 00:20:12.833 rmmod nvme_fabrics 00:20:12.833 rmmod nvme_keyring 00:20:12.833 01:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:12.833 01:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:20:12.833 01:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
00:20:12.833 01:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1318146 ']' 00:20:12.833 01:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1318146 00:20:12.833 01:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 1318146 ']' 00:20:12.833 01:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 1318146 00:20:12.833 01:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:20:12.833 01:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:12.833 01:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1318146 00:20:12.833 01:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:12.833 01:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:12.833 01:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1318146' 00:20:12.833 killing process with pid 1318146 00:20:12.833 01:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 1318146 00:20:12.833 [2024-05-15 01:07:25.107922] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:12.833 01:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 1318146 00:20:13.091 01:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:13.091 01:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:13.091 01:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:13.091 01:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:13.091 01:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:13.091 01:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.091 01:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:13.091 01:07:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.621 01:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:15.621 00:20:15.621 real 0m18.733s 00:20:15.621 user 0m26.114s 00:20:15.621 sys 0m3.409s 00:20:15.621 01:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:15.621 01:07:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:15.621 ************************************ 00:20:15.621 END TEST nvmf_discovery_remove_ifc 00:20:15.621 ************************************ 00:20:15.621 01:07:27 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:15.621 01:07:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:15.621 01:07:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:15.621 01:07:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:20:15.621 ************************************ 00:20:15.621 START TEST nvmf_identify_kernel_target 00:20:15.621 ************************************ 00:20:15.621 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:15.621 * Looking for test storage... 00:20:15.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:15.621 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:15.621 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:20:15.621 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:15.621 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:15.621 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:15.621 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:15.621 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:15.621 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:15.621 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:15.621 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:15.621 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:15.621 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:15.621 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:15.621 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:15.621 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:15.621 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:15.621 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:15.621 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:15.621 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:15.621 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:15.621 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:15.621 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:15.622 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.622 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.622 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.622 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:20:15.622 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.622 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:20:15.622 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:15.622 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:15.622 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:15.622 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:15.622 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:15.622 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:15.622 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:15.622 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:15.622 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:20:15.622 01:07:27 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:15.622 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:15.622 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:15.622 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:15.622 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:15.622 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.622 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:15.622 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.622 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:15.622 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:15.622 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:20:15.622 01:07:27 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:18.154 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:18.154 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:18.154 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.155 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:18.155 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.155 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:18.155 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:18.155 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.155 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:18.155 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:18.155 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.155 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:18.155 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.155 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:18.155 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.155 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:18.155 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:18.155 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.155 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:18.155 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:18.155 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.155 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:18.155 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:20:18.155 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:18.155 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:18.155 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:18.155 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:18.155 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:18.155 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:18.155 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:18.155 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:18.155 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:18.155 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:18.155 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:18.155 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:18.155 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:18.155 01:07:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:18.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:18.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:20:18.155 00:20:18.155 --- 10.0.0.2 ping statistics --- 00:20:18.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.155 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:18.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:18.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:20:18.155 00:20:18.155 --- 10.0.0.1 ping statistics --- 00:20:18.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.155 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@728 -- # local ip 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:18.155 01:07:30 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:18.155 01:07:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:20:19.090 Waiting for block devices as requested 00:20:19.090 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:20:19.090 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:20:19.347 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:20:19.347 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:20:19.347 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:20:19.347 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:20:19.347 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:20:19.605 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:20:19.605 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:20:19.605 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:20:19.866 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:20:19.866 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:20:19.866 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:20:19.866 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:20:20.167 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:20:20.167 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:20:20.167 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:20:20.167 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:20.167 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:20.167 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:20:20.167 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:20:20.167 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:20.167 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:20:20.167 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:20:20.167 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:20:20.167 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:20:20.426 No valid GPT data, bailing 00:20:20.426 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:20.426 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:20:20.426 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:20:20.426 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:20:20.426 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:20:20.426 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:20.426 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:20.426 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:20.426 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:20.426 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:20:20.426 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:20:20.426 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:20:20.426 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:20:20.426 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:20:20.426 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:20:20.426 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:20:20.426 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:20.426 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:20:20.426 00:20:20.426 Discovery Log Number of Records 2, Generation counter 2 00:20:20.426 =====Discovery Log Entry 0====== 00:20:20.426 trtype: tcp 00:20:20.426 adrfam: ipv4 00:20:20.426 subtype: current discovery subsystem 00:20:20.426 treq: not specified, sq flow control disable supported 00:20:20.426 portid: 1 00:20:20.426 trsvcid: 4420 00:20:20.426 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:20.426 traddr: 10.0.0.1 00:20:20.426 eflags: none 00:20:20.426 sectype: none 00:20:20.426 =====Discovery Log Entry 1====== 00:20:20.426 trtype: tcp 00:20:20.426 adrfam: ipv4 00:20:20.426 subtype: nvme subsystem 00:20:20.426 treq: not specified, sq flow control disable supported 00:20:20.426 portid: 1 00:20:20.426 trsvcid: 4420 00:20:20.426 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:20.426 traddr: 10.0.0.1 00:20:20.426 eflags: none 00:20:20.426 sectype: none 00:20:20.426 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:20:20.427 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:20:20.427 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.427 ===================================================== 00:20:20.427 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:20.427 ===================================================== 00:20:20.427 Controller Capabilities/Features 00:20:20.427 ================================ 00:20:20.427 Vendor ID: 0000 00:20:20.427 Subsystem Vendor ID: 0000 00:20:20.427 Serial Number: 65f74d7567321c954802 00:20:20.427 Model Number: Linux 00:20:20.427 Firmware Version: 6.7.0-68 00:20:20.427 Recommended Arb Burst: 0 00:20:20.427 IEEE OUI Identifier: 00 00 00 00:20:20.427 Multi-path I/O 00:20:20.427 May have multiple subsystem ports: No 00:20:20.427 May have multiple 
controllers: No 00:20:20.427 Associated with SR-IOV VF: No 00:20:20.427 Max Data Transfer Size: Unlimited 00:20:20.427 Max Number of Namespaces: 0 00:20:20.427 Max Number of I/O Queues: 1024 00:20:20.427 NVMe Specification Version (VS): 1.3 00:20:20.427 NVMe Specification Version (Identify): 1.3 00:20:20.427 Maximum Queue Entries: 1024 00:20:20.427 Contiguous Queues Required: No 00:20:20.427 Arbitration Mechanisms Supported 00:20:20.427 Weighted Round Robin: Not Supported 00:20:20.427 Vendor Specific: Not Supported 00:20:20.427 Reset Timeout: 7500 ms 00:20:20.427 Doorbell Stride: 4 bytes 00:20:20.427 NVM Subsystem Reset: Not Supported 00:20:20.427 Command Sets Supported 00:20:20.427 NVM Command Set: Supported 00:20:20.427 Boot Partition: Not Supported 00:20:20.427 Memory Page Size Minimum: 4096 bytes 00:20:20.427 Memory Page Size Maximum: 4096 bytes 00:20:20.427 Persistent Memory Region: Not Supported 00:20:20.427 Optional Asynchronous Events Supported 00:20:20.427 Namespace Attribute Notices: Not Supported 00:20:20.427 Firmware Activation Notices: Not Supported 00:20:20.427 ANA Change Notices: Not Supported 00:20:20.427 PLE Aggregate Log Change Notices: Not Supported 00:20:20.427 LBA Status Info Alert Notices: Not Supported 00:20:20.427 EGE Aggregate Log Change Notices: Not Supported 00:20:20.427 Normal NVM Subsystem Shutdown event: Not Supported 00:20:20.427 Zone Descriptor Change Notices: Not Supported 00:20:20.427 Discovery Log Change Notices: Supported 00:20:20.427 Controller Attributes 00:20:20.427 128-bit Host Identifier: Not Supported 00:20:20.427 Non-Operational Permissive Mode: Not Supported 00:20:20.427 NVM Sets: Not Supported 00:20:20.427 Read Recovery Levels: Not Supported 00:20:20.427 Endurance Groups: Not Supported 00:20:20.427 Predictable Latency Mode: Not Supported 00:20:20.427 Traffic Based Keep ALive: Not Supported 00:20:20.427 Namespace Granularity: Not Supported 00:20:20.427 SQ Associations: Not Supported 00:20:20.427 UUID List: Not Supported 00:20:20.427 Multi-Domain Subsystem: Not Supported 00:20:20.427 Fixed Capacity Management: Not Supported 00:20:20.427 Variable Capacity Management: Not Supported 00:20:20.427 Delete Endurance Group: Not Supported 00:20:20.427 Delete NVM Set: Not Supported 00:20:20.427 Extended LBA Formats Supported: Not Supported 00:20:20.427 Flexible Data Placement Supported: Not Supported 00:20:20.427 00:20:20.427 Controller Memory Buffer Support 00:20:20.427 ================================ 00:20:20.427 Supported: No 00:20:20.427 00:20:20.427 Persistent Memory Region Support 00:20:20.427 ================================ 00:20:20.427 Supported: No 00:20:20.427 00:20:20.427 Admin Command Set Attributes 00:20:20.427 ============================ 00:20:20.427 Security Send/Receive: Not Supported 00:20:20.427 Format NVM: Not Supported 00:20:20.427 Firmware Activate/Download: Not Supported 00:20:20.427 Namespace Management: Not Supported 00:20:20.427 Device Self-Test: Not Supported 00:20:20.427 Directives: Not Supported 00:20:20.427 NVMe-MI: Not Supported 00:20:20.427 Virtualization Management: Not Supported 00:20:20.427 Doorbell Buffer Config: Not Supported 00:20:20.427 Get LBA Status Capability: Not Supported 00:20:20.427 Command & Feature Lockdown Capability: Not Supported 00:20:20.427 Abort Command Limit: 1 00:20:20.427 Async Event Request Limit: 1 00:20:20.427 Number of Firmware Slots: N/A 00:20:20.427 Firmware Slot 1 Read-Only: N/A 00:20:20.427 Firmware Activation Without Reset: N/A 00:20:20.427 Multiple Update Detection Support: N/A 
00:20:20.427 Firmware Update Granularity: No Information Provided 00:20:20.427 Per-Namespace SMART Log: No 00:20:20.427 Asymmetric Namespace Access Log Page: Not Supported 00:20:20.427 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:20.427 Command Effects Log Page: Not Supported 00:20:20.427 Get Log Page Extended Data: Supported 00:20:20.427 Telemetry Log Pages: Not Supported 00:20:20.427 Persistent Event Log Pages: Not Supported 00:20:20.427 Supported Log Pages Log Page: May Support 00:20:20.427 Commands Supported & Effects Log Page: Not Supported 00:20:20.427 Feature Identifiers & Effects Log Page:May Support 00:20:20.427 NVMe-MI Commands & Effects Log Page: May Support 00:20:20.427 Data Area 4 for Telemetry Log: Not Supported 00:20:20.427 Error Log Page Entries Supported: 1 00:20:20.427 Keep Alive: Not Supported 00:20:20.427 00:20:20.427 NVM Command Set Attributes 00:20:20.427 ========================== 00:20:20.427 Submission Queue Entry Size 00:20:20.427 Max: 1 00:20:20.427 Min: 1 00:20:20.427 Completion Queue Entry Size 00:20:20.427 Max: 1 00:20:20.427 Min: 1 00:20:20.427 Number of Namespaces: 0 00:20:20.427 Compare Command: Not Supported 00:20:20.427 Write Uncorrectable Command: Not Supported 00:20:20.427 Dataset Management Command: Not Supported 00:20:20.427 Write Zeroes Command: Not Supported 00:20:20.427 Set Features Save Field: Not Supported 00:20:20.427 Reservations: Not Supported 00:20:20.427 Timestamp: Not Supported 00:20:20.427 Copy: Not Supported 00:20:20.427 Volatile Write Cache: Not Present 00:20:20.427 Atomic Write Unit (Normal): 1 00:20:20.427 Atomic Write Unit (PFail): 1 00:20:20.427 Atomic Compare & Write Unit: 1 00:20:20.427 Fused Compare & Write: Not Supported 00:20:20.427 Scatter-Gather List 00:20:20.427 SGL Command Set: Supported 00:20:20.427 SGL Keyed: Not Supported 00:20:20.427 SGL Bit Bucket Descriptor: Not Supported 00:20:20.427 SGL Metadata Pointer: Not Supported 00:20:20.427 Oversized SGL: Not Supported 00:20:20.427 SGL Metadata Address: Not Supported 00:20:20.427 SGL Offset: Supported 00:20:20.427 Transport SGL Data Block: Not Supported 00:20:20.427 Replay Protected Memory Block: Not Supported 00:20:20.427 00:20:20.427 Firmware Slot Information 00:20:20.427 ========================= 00:20:20.427 Active slot: 0 00:20:20.427 00:20:20.427 00:20:20.427 Error Log 00:20:20.427 ========= 00:20:20.427 00:20:20.427 Active Namespaces 00:20:20.427 ================= 00:20:20.427 Discovery Log Page 00:20:20.427 ================== 00:20:20.427 Generation Counter: 2 00:20:20.427 Number of Records: 2 00:20:20.427 Record Format: 0 00:20:20.427 00:20:20.427 Discovery Log Entry 0 00:20:20.427 ---------------------- 00:20:20.427 Transport Type: 3 (TCP) 00:20:20.427 Address Family: 1 (IPv4) 00:20:20.427 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:20.427 Entry Flags: 00:20:20.427 Duplicate Returned Information: 0 00:20:20.427 Explicit Persistent Connection Support for Discovery: 0 00:20:20.427 Transport Requirements: 00:20:20.427 Secure Channel: Not Specified 00:20:20.427 Port ID: 1 (0x0001) 00:20:20.427 Controller ID: 65535 (0xffff) 00:20:20.427 Admin Max SQ Size: 32 00:20:20.427 Transport Service Identifier: 4420 00:20:20.427 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:20.427 Transport Address: 10.0.0.1 00:20:20.427 Discovery Log Entry 1 00:20:20.427 ---------------------- 00:20:20.427 Transport Type: 3 (TCP) 00:20:20.427 Address Family: 1 (IPv4) 00:20:20.427 Subsystem Type: 2 (NVM Subsystem) 00:20:20.427 Entry Flags: 
00:20:20.427 Duplicate Returned Information: 0 00:20:20.427 Explicit Persistent Connection Support for Discovery: 0 00:20:20.427 Transport Requirements: 00:20:20.427 Secure Channel: Not Specified 00:20:20.427 Port ID: 1 (0x0001) 00:20:20.427 Controller ID: 65535 (0xffff) 00:20:20.427 Admin Max SQ Size: 32 00:20:20.427 Transport Service Identifier: 4420 00:20:20.427 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:20:20.427 Transport Address: 10.0.0.1 00:20:20.427 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:20.427 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.427 get_feature(0x01) failed 00:20:20.427 get_feature(0x02) failed 00:20:20.427 get_feature(0x04) failed 00:20:20.427 ===================================================== 00:20:20.427 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:20.427 ===================================================== 00:20:20.427 Controller Capabilities/Features 00:20:20.427 ================================ 00:20:20.427 Vendor ID: 0000 00:20:20.427 Subsystem Vendor ID: 0000 00:20:20.427 Serial Number: 3fbd894687c13808bbf0 00:20:20.427 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:20:20.427 Firmware Version: 6.7.0-68 00:20:20.427 Recommended Arb Burst: 6 00:20:20.427 IEEE OUI Identifier: 00 00 00 00:20:20.427 Multi-path I/O 00:20:20.427 May have multiple subsystem ports: Yes 00:20:20.427 May have multiple controllers: Yes 00:20:20.427 Associated with SR-IOV VF: No 00:20:20.427 Max Data Transfer Size: Unlimited 00:20:20.427 Max Number of Namespaces: 1024 00:20:20.427 Max Number of I/O Queues: 128 00:20:20.427 NVMe Specification Version (VS): 1.3 00:20:20.427 NVMe Specification Version (Identify): 1.3 00:20:20.427 Maximum Queue Entries: 1024 00:20:20.427 Contiguous Queues Required: No 00:20:20.427 Arbitration Mechanisms Supported 00:20:20.427 Weighted Round Robin: Not Supported 00:20:20.427 Vendor Specific: Not Supported 00:20:20.427 Reset Timeout: 7500 ms 00:20:20.427 Doorbell Stride: 4 bytes 00:20:20.427 NVM Subsystem Reset: Not Supported 00:20:20.427 Command Sets Supported 00:20:20.427 NVM Command Set: Supported 00:20:20.427 Boot Partition: Not Supported 00:20:20.427 Memory Page Size Minimum: 4096 bytes 00:20:20.427 Memory Page Size Maximum: 4096 bytes 00:20:20.427 Persistent Memory Region: Not Supported 00:20:20.427 Optional Asynchronous Events Supported 00:20:20.427 Namespace Attribute Notices: Supported 00:20:20.427 Firmware Activation Notices: Not Supported 00:20:20.427 ANA Change Notices: Supported 00:20:20.427 PLE Aggregate Log Change Notices: Not Supported 00:20:20.427 LBA Status Info Alert Notices: Not Supported 00:20:20.427 EGE Aggregate Log Change Notices: Not Supported 00:20:20.427 Normal NVM Subsystem Shutdown event: Not Supported 00:20:20.427 Zone Descriptor Change Notices: Not Supported 00:20:20.427 Discovery Log Change Notices: Not Supported 00:20:20.427 Controller Attributes 00:20:20.427 128-bit Host Identifier: Supported 00:20:20.427 Non-Operational Permissive Mode: Not Supported 00:20:20.427 NVM Sets: Not Supported 00:20:20.427 Read Recovery Levels: Not Supported 00:20:20.427 Endurance Groups: Not Supported 00:20:20.427 Predictable Latency Mode: Not Supported 00:20:20.427 Traffic Based Keep ALive: Supported 00:20:20.427 Namespace Granularity: Not Supported 
00:20:20.427 SQ Associations: Not Supported 00:20:20.427 UUID List: Not Supported 00:20:20.427 Multi-Domain Subsystem: Not Supported 00:20:20.427 Fixed Capacity Management: Not Supported 00:20:20.427 Variable Capacity Management: Not Supported 00:20:20.427 Delete Endurance Group: Not Supported 00:20:20.427 Delete NVM Set: Not Supported 00:20:20.427 Extended LBA Formats Supported: Not Supported 00:20:20.427 Flexible Data Placement Supported: Not Supported 00:20:20.427 00:20:20.427 Controller Memory Buffer Support 00:20:20.427 ================================ 00:20:20.427 Supported: No 00:20:20.427 00:20:20.427 Persistent Memory Region Support 00:20:20.427 ================================ 00:20:20.427 Supported: No 00:20:20.427 00:20:20.427 Admin Command Set Attributes 00:20:20.427 ============================ 00:20:20.427 Security Send/Receive: Not Supported 00:20:20.427 Format NVM: Not Supported 00:20:20.427 Firmware Activate/Download: Not Supported 00:20:20.427 Namespace Management: Not Supported 00:20:20.427 Device Self-Test: Not Supported 00:20:20.427 Directives: Not Supported 00:20:20.427 NVMe-MI: Not Supported 00:20:20.427 Virtualization Management: Not Supported 00:20:20.427 Doorbell Buffer Config: Not Supported 00:20:20.427 Get LBA Status Capability: Not Supported 00:20:20.427 Command & Feature Lockdown Capability: Not Supported 00:20:20.427 Abort Command Limit: 4 00:20:20.427 Async Event Request Limit: 4 00:20:20.427 Number of Firmware Slots: N/A 00:20:20.427 Firmware Slot 1 Read-Only: N/A 00:20:20.427 Firmware Activation Without Reset: N/A 00:20:20.427 Multiple Update Detection Support: N/A 00:20:20.427 Firmware Update Granularity: No Information Provided 00:20:20.427 Per-Namespace SMART Log: Yes 00:20:20.427 Asymmetric Namespace Access Log Page: Supported 00:20:20.427 ANA Transition Time : 10 sec 00:20:20.427 00:20:20.427 Asymmetric Namespace Access Capabilities 00:20:20.427 ANA Optimized State : Supported 00:20:20.427 ANA Non-Optimized State : Supported 00:20:20.427 ANA Inaccessible State : Supported 00:20:20.427 ANA Persistent Loss State : Supported 00:20:20.427 ANA Change State : Supported 00:20:20.427 ANAGRPID is not changed : No 00:20:20.427 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:20:20.427 00:20:20.427 ANA Group Identifier Maximum : 128 00:20:20.427 Number of ANA Group Identifiers : 128 00:20:20.427 Max Number of Allowed Namespaces : 1024 00:20:20.427 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:20:20.427 Command Effects Log Page: Supported 00:20:20.427 Get Log Page Extended Data: Supported 00:20:20.427 Telemetry Log Pages: Not Supported 00:20:20.427 Persistent Event Log Pages: Not Supported 00:20:20.427 Supported Log Pages Log Page: May Support 00:20:20.427 Commands Supported & Effects Log Page: Not Supported 00:20:20.427 Feature Identifiers & Effects Log Page:May Support 00:20:20.427 NVMe-MI Commands & Effects Log Page: May Support 00:20:20.427 Data Area 4 for Telemetry Log: Not Supported 00:20:20.427 Error Log Page Entries Supported: 128 00:20:20.427 Keep Alive: Supported 00:20:20.427 Keep Alive Granularity: 1000 ms 00:20:20.427 00:20:20.427 NVM Command Set Attributes 00:20:20.427 ========================== 00:20:20.427 Submission Queue Entry Size 00:20:20.427 Max: 64 00:20:20.427 Min: 64 00:20:20.427 Completion Queue Entry Size 00:20:20.427 Max: 16 00:20:20.427 Min: 16 00:20:20.427 Number of Namespaces: 1024 00:20:20.427 Compare Command: Not Supported 00:20:20.427 Write Uncorrectable Command: Not Supported 00:20:20.427 Dataset Management Command: Supported 
00:20:20.427 Write Zeroes Command: Supported 00:20:20.427 Set Features Save Field: Not Supported 00:20:20.427 Reservations: Not Supported 00:20:20.427 Timestamp: Not Supported 00:20:20.427 Copy: Not Supported 00:20:20.427 Volatile Write Cache: Present 00:20:20.427 Atomic Write Unit (Normal): 1 00:20:20.427 Atomic Write Unit (PFail): 1 00:20:20.427 Atomic Compare & Write Unit: 1 00:20:20.427 Fused Compare & Write: Not Supported 00:20:20.427 Scatter-Gather List 00:20:20.427 SGL Command Set: Supported 00:20:20.427 SGL Keyed: Not Supported 00:20:20.427 SGL Bit Bucket Descriptor: Not Supported 00:20:20.427 SGL Metadata Pointer: Not Supported 00:20:20.427 Oversized SGL: Not Supported 00:20:20.427 SGL Metadata Address: Not Supported 00:20:20.427 SGL Offset: Supported 00:20:20.427 Transport SGL Data Block: Not Supported 00:20:20.427 Replay Protected Memory Block: Not Supported 00:20:20.427 00:20:20.427 Firmware Slot Information 00:20:20.427 ========================= 00:20:20.427 Active slot: 0 00:20:20.427 00:20:20.427 Asymmetric Namespace Access 00:20:20.427 =========================== 00:20:20.427 Change Count : 0 00:20:20.427 Number of ANA Group Descriptors : 1 00:20:20.427 ANA Group Descriptor : 0 00:20:20.427 ANA Group ID : 1 00:20:20.427 Number of NSID Values : 1 00:20:20.427 Change Count : 0 00:20:20.427 ANA State : 1 00:20:20.427 Namespace Identifier : 1 00:20:20.427 00:20:20.427 Commands Supported and Effects 00:20:20.427 ============================== 00:20:20.427 Admin Commands 00:20:20.427 -------------- 00:20:20.427 Get Log Page (02h): Supported 00:20:20.427 Identify (06h): Supported 00:20:20.427 Abort (08h): Supported 00:20:20.427 Set Features (09h): Supported 00:20:20.427 Get Features (0Ah): Supported 00:20:20.427 Asynchronous Event Request (0Ch): Supported 00:20:20.427 Keep Alive (18h): Supported 00:20:20.427 I/O Commands 00:20:20.427 ------------ 00:20:20.427 Flush (00h): Supported 00:20:20.427 Write (01h): Supported LBA-Change 00:20:20.427 Read (02h): Supported 00:20:20.427 Write Zeroes (08h): Supported LBA-Change 00:20:20.427 Dataset Management (09h): Supported 00:20:20.427 00:20:20.427 Error Log 00:20:20.427 ========= 00:20:20.427 Entry: 0 00:20:20.427 Error Count: 0x3 00:20:20.427 Submission Queue Id: 0x0 00:20:20.427 Command Id: 0x5 00:20:20.427 Phase Bit: 0 00:20:20.427 Status Code: 0x2 00:20:20.427 Status Code Type: 0x0 00:20:20.427 Do Not Retry: 1 00:20:20.427 Error Location: 0x28 00:20:20.427 LBA: 0x0 00:20:20.427 Namespace: 0x0 00:20:20.427 Vendor Log Page: 0x0 00:20:20.427 ----------- 00:20:20.427 Entry: 1 00:20:20.427 Error Count: 0x2 00:20:20.427 Submission Queue Id: 0x0 00:20:20.427 Command Id: 0x5 00:20:20.427 Phase Bit: 0 00:20:20.427 Status Code: 0x2 00:20:20.427 Status Code Type: 0x0 00:20:20.427 Do Not Retry: 1 00:20:20.427 Error Location: 0x28 00:20:20.427 LBA: 0x0 00:20:20.427 Namespace: 0x0 00:20:20.427 Vendor Log Page: 0x0 00:20:20.427 ----------- 00:20:20.427 Entry: 2 00:20:20.427 Error Count: 0x1 00:20:20.427 Submission Queue Id: 0x0 00:20:20.427 Command Id: 0x4 00:20:20.427 Phase Bit: 0 00:20:20.427 Status Code: 0x2 00:20:20.427 Status Code Type: 0x0 00:20:20.427 Do Not Retry: 1 00:20:20.427 Error Location: 0x28 00:20:20.427 LBA: 0x0 00:20:20.427 Namespace: 0x0 00:20:20.428 Vendor Log Page: 0x0 00:20:20.428 00:20:20.428 Number of Queues 00:20:20.428 ================ 00:20:20.428 Number of I/O Submission Queues: 128 00:20:20.428 Number of I/O Completion Queues: 128 00:20:20.428 00:20:20.428 ZNS Specific Controller Data 00:20:20.428 
============================ 00:20:20.428 Zone Append Size Limit: 0 00:20:20.428 00:20:20.428 00:20:20.428 Active Namespaces 00:20:20.428 ================= 00:20:20.428 get_feature(0x05) failed 00:20:20.428 Namespace ID:1 00:20:20.428 Command Set Identifier: NVM (00h) 00:20:20.428 Deallocate: Supported 00:20:20.428 Deallocated/Unwritten Error: Not Supported 00:20:20.428 Deallocated Read Value: Unknown 00:20:20.428 Deallocate in Write Zeroes: Not Supported 00:20:20.428 Deallocated Guard Field: 0xFFFF 00:20:20.428 Flush: Supported 00:20:20.428 Reservation: Not Supported 00:20:20.428 Namespace Sharing Capabilities: Multiple Controllers 00:20:20.428 Size (in LBAs): 1953525168 (931GiB) 00:20:20.428 Capacity (in LBAs): 1953525168 (931GiB) 00:20:20.428 Utilization (in LBAs): 1953525168 (931GiB) 00:20:20.428 UUID: 3142a506-6be6-4a76-a2d1-cbc21b85f6c6 00:20:20.428 Thin Provisioning: Not Supported 00:20:20.428 Per-NS Atomic Units: Yes 00:20:20.428 Atomic Boundary Size (Normal): 0 00:20:20.428 Atomic Boundary Size (PFail): 0 00:20:20.428 Atomic Boundary Offset: 0 00:20:20.428 NGUID/EUI64 Never Reused: No 00:20:20.428 ANA group ID: 1 00:20:20.428 Namespace Write Protected: No 00:20:20.428 Number of LBA Formats: 1 00:20:20.428 Current LBA Format: LBA Format #00 00:20:20.428 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:20.428 00:20:20.428 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:20:20.428 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:20.428 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:20:20.428 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:20.428 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:20:20.428 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:20.428 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:20.428 rmmod nvme_tcp 00:20:20.428 rmmod nvme_fabrics 00:20:20.428 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:20.428 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:20:20.428 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:20:20.428 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:20:20.428 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:20.428 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:20.428 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:20.428 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:20.428 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:20.428 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.428 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:20.428 01:07:32 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.956 01:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:22.956 
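The identify dump above is what the kernel nvmet target reports back over TCP: a fabrics-style controller with no CMB/PMR, a 128-entry error log, keep-alive at 1000 ms granularity, a single ANA group (ID 1) in the optimized state, and one namespace of 1953525168 512-byte LBAs (~931 GiB). A minimal sketch of pulling the same log pages by hand with nvme-cli, assuming the target were still exported on 10.0.0.1:4420 and enumerated as /dev/nvme1 on the initiator (both the device name and the still-running target are assumptions, not part of this run, which tears the target down on the next lines):

nvme connect -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn
nvme id-ctrl /dev/nvme1          # controller attributes (AERL, SGL support, log page bits, ...)
nvme error-log /dev/nvme1 -e 3   # the three error entries listed above (Error Count 0x1..0x3)
nvme ana-log /dev/nvme1          # one ANA group descriptor, group ID 1, NSID 1, optimized
nvme id-ns /dev/nvme1 -n 1       # namespace geometry: 1953525168 x 512-byte LBAs
nvme disconnect -n nqn.2016-06.io.spdk:testnqn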
01:07:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:20:22.956 01:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:22.956 01:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:20:22.956 01:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:22.956 01:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:22.956 01:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:22.956 01:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:22.956 01:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:20:22.956 01:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:20:22.956 01:07:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:20:23.890 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:20:23.890 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:20:23.890 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:20:23.890 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:20:23.890 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:20:23.890 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:20:23.890 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:20:23.890 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:20:23.890 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:20:23.890 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:20:23.890 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:20:23.890 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:20:23.890 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:20:23.890 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:20:23.890 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:20:23.890 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:20:24.825 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:20:25.084 00:20:25.084 real 0m9.791s 00:20:25.084 user 0m2.199s 00:20:25.084 sys 0m3.712s 00:20:25.084 01:07:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:25.084 01:07:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.084 ************************************ 00:20:25.084 END TEST nvmf_identify_kernel_target 00:20:25.084 ************************************ 00:20:25.084 01:07:37 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:25.084 01:07:37 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:25.084 01:07:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:25.084 01:07:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:25.084 ************************************ 00:20:25.084 START TEST nvmf_auth 00:20:25.084 ************************************ 00:20:25.084 01:07:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:25.084 * 
Looking for test storage... 00:20:25.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:25.084 01:07:37 nvmf_tcp.nvmf_auth -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:25.084 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@7 -- # uname -s 00:20:25.084 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:25.084 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:25.084 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:25.084 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- paths/export.sh@5 -- # export PATH 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@47 -- # : 0 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- host/auth.sh@21 -- # keys=() 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- host/auth.sh@21 -- # ckeys=() 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- host/auth.sh@81 -- # nvmftestinit 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@285 -- # xtrace_disable 00:20:25.085 01:07:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@291 -- # pci_devs=() 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@295 -- # net_devs=() 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@296 -- # e810=() 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@296 -- # local -ga e810 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@297 -- # x722=() 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@297 -- # local -ga x722 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@298 -- # mlx=() 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@298 -- # local -ga mlx 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:27.616 01:07:39 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:27.616 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:27.616 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:27.616 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:27.616 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:27.617 Found net devices under 0000:0a:00.1: cvl_0_1 
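Both functions of the E810 NIC (device 0x159b) are found and mapped to their kernel net devices, cvl_0_0 and cvl_0_1. The mapping is a plain sysfs walk; a minimal sketch of the same lookup for one of the functions seen on this node:

pci=0000:0a:00.0
ls /sys/bus/pci/devices/$pci/net        # -> cvl_0_0 on this node
cat /sys/class/net/cvl_0_0/operstate    # gather_supported_nvmf_pci_devs only keeps devices that are up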
00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@414 -- # is_hw=yes 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:27.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:27.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:20:27.617 00:20:27.617 --- 10.0.0.2 ping statistics --- 00:20:27.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:27.617 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:27.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:27.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:20:27.617 00:20:27.617 --- 10.0.0.1 ping statistics --- 00:20:27.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:27.617 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@422 -- # return 0 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- host/auth.sh@82 -- # nvmfappstart -L nvme_auth 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:27.617 01:07:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:27.617 01:07:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@481 -- # nvmfpid=1326274 00:20:27.617 01:07:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:20:27.617 01:07:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@482 -- # waitforlisten 1326274 00:20:27.617 01:07:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@827 -- # '[' -z 1326274 ']' 00:20:27.617 01:07:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.617 01:07:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:27.617 01:07:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
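nvmf_tcp_init splits the two ports across network namespaces so one node can play both sides of the fabric: cvl_0_0 moves into cvl_0_0_ns_spdk as 10.0.0.2 while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and nvmf_tgt is then launched inside the namespace with nvme_auth tracing enabled. Condensed from the trace above (paths shortened):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                       # 0.235 ms round trip in the log above
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &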
00:20:27.617 01:07:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:27.617 01:07:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:28.992 01:07:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:28.992 01:07:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@860 -- # return 0 00:20:28.992 01:07:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:28.992 01:07:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:28.993 01:07:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@83 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # gen_key null 32 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=e24123e5af7ff1dce429efab9d9eb9ee 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.9Jb 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key e24123e5af7ff1dce429efab9d9eb9ee 0 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 e24123e5af7ff1dce429efab9d9eb9ee 0 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=e24123e5af7ff1dce429efab9d9eb9ee 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.9Jb 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.9Jb 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # keys[0]=/tmp/spdk.key-null.9Jb 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # gen_key sha512 64 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha512 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=64 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # 
key=1d71edd1068ab9e8b0544cb875f9aea0c11504b41d1b532d21d4a2dc5788c420 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha512.XXX 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha512.zP8 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 1d71edd1068ab9e8b0544cb875f9aea0c11504b41d1b532d21d4a2dc5788c420 3 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 1d71edd1068ab9e8b0544cb875f9aea0c11504b41d1b532d21d4a2dc5788c420 3 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=1d71edd1068ab9e8b0544cb875f9aea0c11504b41d1b532d21d4a2dc5788c420 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=3 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha512.zP8 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha512.zP8 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # ckeys[0]=/tmp/spdk.key-sha512.zP8 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # gen_key null 48 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=a3cecf1682c92da1bf4c468c8d280d271b0e955506191c7b 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.fop 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key a3cecf1682c92da1bf4c468c8d280d271b0e955506191c7b 0 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 a3cecf1682c92da1bf4c468c8d280d271b0e955506191c7b 0 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=a3cecf1682c92da1bf4c468c8d280d271b0e955506191c7b 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.fop 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.fop 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # keys[1]=/tmp/spdk.key-null.fop 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # gen_key sha384 48 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' 
['sha384']='2' ['sha512']='3') 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha384 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=c266d4b82347c92e7f950061115d918db8515d08ba51ab1e 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha384.XXX 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha384.Okz 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key c266d4b82347c92e7f950061115d918db8515d08ba51ab1e 2 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 c266d4b82347c92e7f950061115d918db8515d08ba51ab1e 2 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=c266d4b82347c92e7f950061115d918db8515d08ba51ab1e 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=2 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha384.Okz 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha384.Okz 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # ckeys[1]=/tmp/spdk.key-sha384.Okz 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # gen_key sha256 32 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha256 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=b12286084ab7cc3da1d00996f68d3c00 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha256.XXX 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha256.Bn1 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key b12286084ab7cc3da1d00996f68d3c00 1 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 b12286084ab7cc3da1d00996f68d3c00 1 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=b12286084ab7cc3da1d00996f68d3c00 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=1 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha256.Bn1 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha256.Bn1 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # 
keys[2]=/tmp/spdk.key-sha256.Bn1 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # gen_key sha256 32 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha256 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=6043ec88e61963466c073dc4d0b90ad4 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha256.XXX 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha256.Gqz 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 6043ec88e61963466c073dc4d0b90ad4 1 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 6043ec88e61963466c073dc4d0b90ad4 1 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=6043ec88e61963466c073dc4d0b90ad4 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=1 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha256.Gqz 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha256.Gqz 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # ckeys[2]=/tmp/spdk.key-sha256.Gqz 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # gen_key sha384 48 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha384 00:20:28.993 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:20:28.994 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:28.994 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=1e96dea58ce03dda85df63b665b4f7c26ae2d2df8be14439 00:20:28.994 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha384.XXX 00:20:28.994 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha384.SYe 00:20:28.994 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 1e96dea58ce03dda85df63b665b4f7c26ae2d2df8be14439 2 00:20:28.994 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 1e96dea58ce03dda85df63b665b4f7c26ae2d2df8be14439 2 00:20:28.994 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:20:28.994 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:28.994 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=1e96dea58ce03dda85df63b665b4f7c26ae2d2df8be14439 00:20:28.994 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=2 00:20:28.994 01:07:41 nvmf_tcp.nvmf_auth 
-- nvmf/common.sh@705 -- # python - 00:20:28.994 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha384.SYe 00:20:28.994 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha384.SYe 00:20:28.994 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # keys[3]=/tmp/spdk.key-sha384.SYe 00:20:28.994 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # gen_key null 32 00:20:28.994 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:20:28.994 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:28.994 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:20:28.994 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:20:28.994 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:20:28.994 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:28.994 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=a902daf4831ea584f6cf024c551539ba 00:20:28.994 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:20:28.994 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.YxE 00:20:28.994 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key a902daf4831ea584f6cf024c551539ba 0 00:20:28.994 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 a902daf4831ea584f6cf024c551539ba 0 00:20:28.994 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:20:28.994 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:28.994 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=a902daf4831ea584f6cf024c551539ba 00:20:28.994 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:20:28.994 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:20:29.252 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.YxE 00:20:29.252 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.YxE 00:20:29.252 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # ckeys[3]=/tmp/spdk.key-null.YxE 00:20:29.252 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # gen_key sha512 64 00:20:29.252 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:20:29.252 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:29.252 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:20:29.252 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha512 00:20:29.252 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=64 00:20:29.252 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:29.253 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=169be411bfa13a35a6b5133deea720b446df21aa2d34383d67f51be29ecc7c9d 00:20:29.253 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha512.XXX 00:20:29.253 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha512.qzJ 00:20:29.253 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 169be411bfa13a35a6b5133deea720b446df21aa2d34383d67f51be29ecc7c9d 3 00:20:29.253 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 169be411bfa13a35a6b5133deea720b446df21aa2d34383d67f51be29ecc7c9d 3 00:20:29.253 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix 
key digest 00:20:29.253 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:29.253 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=169be411bfa13a35a6b5133deea720b446df21aa2d34383d67f51be29ecc7c9d 00:20:29.253 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=3 00:20:29.253 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:20:29.253 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha512.qzJ 00:20:29.253 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha512.qzJ 00:20:29.253 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # keys[4]=/tmp/spdk.key-sha512.qzJ 00:20:29.253 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # ckeys[4]= 00:20:29.253 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@92 -- # waitforlisten 1326274 00:20:29.253 01:07:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@827 -- # '[' -z 1326274 ']' 00:20:29.253 01:07:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.253 01:07:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:29.253 01:07:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.253 01:07:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:29.253 01:07:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@860 -- # return 0 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.9Jb 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha512.zP8 ]] 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zP8 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.fop 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha384.Okz ]] 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Okz 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
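Each gen_key call above draws random bytes with xxd and has format_dhchap_key wrap the hex string into the DHHC-1 secret format printed later in the log (DHHC-1:<hash id>:<base64 blob>:, with 00 for a plain key and 01/02/03 for sha256/sha384/sha512 keys). A minimal way to reproduce the wrapping, under the assumption that the blob is the ASCII secret with a little-endian CRC-32 appended before base64 encoding; that CRC detail is inferred from the DHHC-1 interchange format and is not visible in the trace itself:

# reproduce gen_key + format_dhchap_key for a 48-character secret
key=$(xxd -p -c0 -l 24 /dev/urandom)     # 24 random bytes -> 48 hex characters
digest=0                                  # 0 = null, 1 = sha256, 2 = sha384, 3 = sha512
python3 - "$key" "$digest" <<'PY'
import sys, base64, struct, zlib
secret = sys.argv[1].encode()
crc = struct.pack("<I", zlib.crc32(secret) & 0xffffffff)   # assumption: CRC-32, little endian
print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(secret + crc).decode()))
PY

The formatted secret lands in a mode-0600 temp file (keys[0..4], ckeys[0..3]) and each file is then registered with the running nvmf_tgt through rpc_cmd keyring_file_add_key, as traced around this point.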
xtrace_disable 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Bn1 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha256.Gqz ]] 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Gqz 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.SYe 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-null.YxE ]] 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.YxE 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.qzJ 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n '' ]] 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@98 -- # nvmet_auth_init 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@35 -- # get_main_ns_ip 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth 
-- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@639 -- # local block nvme 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@642 -- # modprobe nvmet 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:29.511 01:07:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:20:30.884 Waiting for block devices as requested 00:20:30.884 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:20:30.884 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:20:31.142 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:20:31.142 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:20:31.142 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:20:31.142 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:20:31.399 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:20:31.399 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:20:31.399 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:20:31.399 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:20:31.656 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:20:31.656 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:20:31.656 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:20:31.656 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:20:31.914 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:20:31.914 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:20:31.914 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:20:32.481 No valid GPT data, bailing 00:20:32.481 
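configure_kernel_target first needs a local NVMe namespace it can safely export: the loop over /sys/block/nvme* skips zoned devices, and the spdk-gpt.py / blkid probes must find no partition table ("No valid GPT data, bailing" is the desired outcome here, so /dev/nvme0n1 is selected). The lines that follow then build the export through nvmet configfs; condensed below, with the standard attribute file names spelled out (the trace only shows the values being echoed, so the exact paths are inferred):

sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
mkdir -p "$sub/namespaces/1" "$port"
echo 1            > "$sub/attr_allow_any_host"     # auth.sh later restricts this to nqn.2024-02.io.spdk:host0
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
echo 1            > "$sub/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"
nvme discover -a 10.0.0.1 -t tcp -s 4420           # two records, as in the discovery log above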
01:07:44 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # pt= 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- scripts/common.sh@392 -- # return 1 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@667 -- # echo 1 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@669 -- # echo 1 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@672 -- # echo tcp 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@673 -- # echo 4420 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@674 -- # echo ipv4 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:20:32.481 00:20:32.481 Discovery Log Number of Records 2, Generation counter 2 00:20:32.481 =====Discovery Log Entry 0====== 00:20:32.481 trtype: tcp 00:20:32.481 adrfam: ipv4 00:20:32.481 subtype: current discovery subsystem 00:20:32.481 treq: not specified, sq flow control disable supported 00:20:32.481 portid: 1 00:20:32.481 trsvcid: 4420 00:20:32.481 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:32.481 traddr: 10.0.0.1 00:20:32.481 eflags: none 00:20:32.481 sectype: none 00:20:32.481 =====Discovery Log Entry 1====== 00:20:32.481 trtype: tcp 00:20:32.481 adrfam: ipv4 00:20:32.481 subtype: nvme subsystem 00:20:32.481 treq: not specified, sq flow control disable supported 00:20:32.481 portid: 1 00:20:32.481 trsvcid: 4420 00:20:32.481 subnqn: nqn.2024-02.io.spdk:cnode0 00:20:32.481 traddr: 10.0.0.1 00:20:32.481 eflags: none 00:20:32.481 sectype: none 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@37 -- # echo 0 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@101 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # 
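The block above is configure_kernel_target plus the start of nvmet_auth_init: after setup.sh reset hands the NVMe drive back to the kernel, a Linux nvmet subsystem, namespace and TCP port are created through configfs, verified with nvme discover, and the host NQN is then registered as the only allowed host. xtrace does not show where each echo is redirected, so the attribute names below are the standard nvmet configfs files this sequence normally writes; treat them as an assumption, not a literal transcript:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # assumed target (could be attr_serial)
echo 1 > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"

# Initiator-side sanity check, as in the trace (hostnqn/hostid options omitted here):
nvme discover -t tcp -a 10.0.0.1 -s 4420

# nvmet_auth_init then restricts access to the one host that will authenticate:
mkdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
echo 0 > "$subsys/attr_allow_any_host"             # assumed target of the 'echo 0' at host/auth.sh@37
ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$subsys/allowed_hosts/"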
dhgroup=ffdhe2048 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:YTNjZWNmMTY4MmM5MmRhMWJmNGM0NjhjOGQyODBkMjcxYjBlOTU1NTA2MTkxYzdiW44MJQ==: 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:YTNjZWNmMTY4MmM5MmRhMWJmNGM0NjhjOGQyODBkMjcxYjBlOTU1NTA2MTkxYzdiW44MJQ==: 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: ]] 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # IFS=, 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@107 -- # printf %s sha256,sha384,sha512 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # IFS=, 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@107 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256,sha384,sha512 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:32.481 01:07:44 
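nvmet_auth_set_key (host/auth.sh@42-51) programs the target side of the handshake for that host entry: the echoes carry the hash name, the FFDHE group and the DHHC-1 secrets for the key index under test, with the controller secret written only when a ckey exists. The redirection targets are again not captured in the trace; the standard nvmet host attributes are assumed in this sketch, and the secrets are abbreviated rather than repeating the full strings from the log:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

echo 'hmac(sha256)'  > "$host/dhchap_hash"      # digest for DH-HMAC-CHAP
echo ffdhe2048       > "$host/dhchap_dhgroup"   # DH group for this pass
echo "DHHC-1:00:..." > "$host/dhchap_key"       # host secret for the key index under test (abbreviated)
# Only when a controller secret (ckeyN) was generated, enable bidirectional auth:
echo "DHHC-1:02:..." > "$host/dhchap_ctrl_key"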
nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:32.481 nvme0n1 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.481 01:07:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTI0MTIzZTVhZjdmZjFkY2U0MjllZmFiOWQ5ZWI5ZWXDyf9a: 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTI0MTIzZTVhZjdmZjFkY2U0MjllZmFiOWQ5ZWI5ZWXDyf9a: 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: ]] 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 0 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 
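connect_authenticate (host/auth.sh@68-78) is the initiator half of every iteration: it narrows the SPDK host to one digest/DH-group combination, attaches a controller through the authenticated path with the matching keyring names, confirms that a controller called nvme0 appears, and detaches before the next combination. One pass written out with rpc.py directly (rpc_cmd in the trace is assumed to be the usual wrapper), using keyid 1 as above:

# Allow only the digest/DH group under test on the SPDK (initiator) side.
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Attach with DH-HMAC-CHAP; --dhchap-ctrlr-key is passed only when ckey1 was loaded.
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Authentication succeeded if the controller registered under the expected name.
[[ "$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]

# Tear down before the next digest/dhgroup/key combination.
./scripts/rpc.py bdev_nvme_detach_controller nvme0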
00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.740 01:07:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:32.740 nvme0n1 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:32.740 01:07:45 
nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:YTNjZWNmMTY4MmM5MmRhMWJmNGM0NjhjOGQyODBkMjcxYjBlOTU1NTA2MTkxYzdiW44MJQ==: 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:YTNjZWNmMTY4MmM5MmRhMWJmNGM0NjhjOGQyODBkMjcxYjBlOTU1NTA2MTkxYzdiW44MJQ==: 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: ]] 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 1 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.740 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:32.999 nvme0n1 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:32.999 01:07:45 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:YjEyMjg2MDg0YWI3Y2MzZGExZDAwOTk2ZjY4ZDNjMDAinTfj: 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:YjEyMjg2MDg0YWI3Y2MzZGExZDAwOTk2ZjY4ZDNjMDAinTfj: 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: ]] 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 2 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:32.999 nvme0n1 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.999 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:33.000 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MWU5NmRlYTU4Y2UwM2RkYTg1ZGY2M2I2NjViNGY3YzI2YWUyZDJkZjhiZTE0NDM556km/g==: 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MWU5NmRlYTU4Y2UwM2RkYTg1ZGY2M2I2NjViNGY3YzI2YWUyZDJkZjhiZTE0NDM556km/g==: 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: ]] 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 3 
00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.258 nvme0n1 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth 
-- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:MTY5YmU0MTFiZmExM2EzNWE2YjUxMzNkZWVhNzIwYjQ0NmRmMjFhYTJkMzQzODNkNjdmNTFiZTI5ZWNjN2M5ZGOp3HA=: 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:MTY5YmU0MTFiZmExM2EzNWE2YjUxMzNkZWVhNzIwYjQ0NmRmMjFhYTJkMzQzODNkNjdmNTFiZTI5ZWNjN2M5ZGOp3HA=: 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 4 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.258 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.516 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.516 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:33.516 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:33.516 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:33.516 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:33.516 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:33.516 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:33.516 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:33.516 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:33.516 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:33.516 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:33.516 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:33.516 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:33.516 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.516 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.516 nvme0n1 00:20:33.516 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.516 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:33.516 01:07:45 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.516 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.516 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:33.516 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTI0MTIzZTVhZjdmZjFkY2U0MjllZmFiOWQ5ZWI5ZWXDyf9a: 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTI0MTIzZTVhZjdmZjFkY2U0MjllZmFiOWQ5ZWI5ZWXDyf9a: 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: ]] 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 0 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:33.517 01:07:45 
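From here the remainder of the section is the same cycle repeated: host/auth.sh@113-117 walk every digest, every FFDHE group and every key index, re-keying the kernel host entry and re-attaching each time (the trace above has just moved from ffdhe2048 to ffdhe3072). The driving loop, reconstructed from the line markers in the trace and relying on the test's own nvmet_auth_set_key and connect_authenticate helpers:

# Reconstructed loop skeleton; digests and dhgroups mirror the lists printed at host/auth.sh@107.
digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do                       # keys[0..4] loaded into the keyring earlier
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # program the kernel target host entry
      connect_authenticate "$digest" "$dhgroup" "$keyid" # attach, verify nvme0, detach
    done
  done
done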
nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.517 01:07:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.775 nvme0n1 00:20:33.775 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.775 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:33.775 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.775 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.775 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:YTNjZWNmMTY4MmM5MmRhMWJmNGM0NjhjOGQyODBkMjcxYjBlOTU1NTA2MTkxYzdiW44MJQ==: 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:YTNjZWNmMTY4MmM5MmRhMWJmNGM0NjhjOGQyODBkMjcxYjBlOTU1NTA2MTkxYzdiW44MJQ==: 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: ]] 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 1 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.776 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.034 nvme0n1 00:20:34.034 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.034 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.034 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.034 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.034 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:34.034 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.034 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.034 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.034 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.034 01:07:46 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:20:34.034 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.034 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:34.034 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:20:34.034 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:34.034 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:34.034 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:34.034 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:34.034 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:YjEyMjg2MDg0YWI3Y2MzZGExZDAwOTk2ZjY4ZDNjMDAinTfj: 00:20:34.034 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: 00:20:34.034 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:34.034 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:34.034 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:YjEyMjg2MDg0YWI3Y2MzZGExZDAwOTk2ZjY4ZDNjMDAinTfj: 00:20:34.034 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: ]] 00:20:34.034 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: 00:20:34.034 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 2 00:20:34.034 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:34.034 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:34.035 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:20:34.035 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:20:34.035 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:34.035 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:34.035 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.035 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.035 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.035 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:34.035 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:34.035 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:34.035 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:34.035 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:34.035 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:34.035 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:34.035 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:34.035 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:34.035 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:34.035 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:34.035 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.035 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.035 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.293 nvme0n1 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MWU5NmRlYTU4Y2UwM2RkYTg1ZGY2M2I2NjViNGY3YzI2YWUyZDJkZjhiZTE0NDM556km/g==: 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MWU5NmRlYTU4Y2UwM2RkYTg1ZGY2M2I2NjViNGY3YzI2YWUyZDJkZjhiZTE0NDM556km/g==: 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: ]] 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 3 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.293 nvme0n1 00:20:34.293 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.552 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.552 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.552 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:34.552 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.552 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.552 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:MTY5YmU0MTFiZmExM2EzNWE2YjUxMzNkZWVhNzIwYjQ0NmRmMjFhYTJkMzQzODNkNjdmNTFiZTI5ZWNjN2M5ZGOp3HA=: 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- 
host/auth.sh@49 -- # echo ffdhe3072 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:MTY5YmU0MTFiZmExM2EzNWE2YjUxMzNkZWVhNzIwYjQ0NmRmMjFhYTJkMzQzODNkNjdmNTFiZTI5ZWNjN2M5ZGOp3HA=: 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 4 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.553 nvme0n1 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.553 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTI0MTIzZTVhZjdmZjFkY2U0MjllZmFiOWQ5ZWI5ZWXDyf9a: 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTI0MTIzZTVhZjdmZjFkY2U0MjllZmFiOWQ5ZWI5ZWXDyf9a: 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: ]] 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 0 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:34.812 01:07:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.813 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.813 01:07:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.813 nvme0n1 00:20:34.813 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.813 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.813 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.813 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:34.813 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:35.081 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.081 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.081 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.081 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.081 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:35.081 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.081 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:35.081 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:20:35.081 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:35.081 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:35.081 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:35.081 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:35.081 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:YTNjZWNmMTY4MmM5MmRhMWJmNGM0NjhjOGQyODBkMjcxYjBlOTU1NTA2MTkxYzdiW44MJQ==: 00:20:35.081 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: 00:20:35.081 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:35.081 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:35.081 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:YTNjZWNmMTY4MmM5MmRhMWJmNGM0NjhjOGQyODBkMjcxYjBlOTU1NTA2MTkxYzdiW44MJQ==: 00:20:35.081 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: ]] 00:20:35.081 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: 00:20:35.082 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 1 00:20:35.082 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:35.082 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:35.082 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:20:35.082 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:20:35.082 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@71 
-- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:35.082 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:35.082 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.082 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:35.082 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.082 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:35.082 01:07:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:35.082 01:07:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:35.082 01:07:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:35.082 01:07:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.082 01:07:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.082 01:07:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:35.082 01:07:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.082 01:07:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:35.082 01:07:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:35.082 01:07:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:35.082 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.082 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.082 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:35.340 nvme0n1 00:20:35.340 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.340 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.340 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.340 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:35.340 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:35.340 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.340 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.340 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.340 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.340 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:35.340 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.340 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:35.340 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:20:35.340 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:35.340 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:35.340 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:35.340 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:35.340 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:01:YjEyMjg2MDg0YWI3Y2MzZGExZDAwOTk2ZjY4ZDNjMDAinTfj: 00:20:35.340 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: 00:20:35.340 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:35.341 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:35.341 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:YjEyMjg2MDg0YWI3Y2MzZGExZDAwOTk2ZjY4ZDNjMDAinTfj: 00:20:35.341 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: ]] 00:20:35.341 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: 00:20:35.341 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 2 00:20:35.341 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:35.341 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:35.341 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:20:35.341 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:20:35.341 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:35.341 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:35.341 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.341 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:35.341 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.341 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:35.341 01:07:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:35.341 01:07:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:35.341 01:07:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:35.341 01:07:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.341 01:07:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.341 01:07:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:35.341 01:07:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.341 01:07:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:35.341 01:07:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:35.341 01:07:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:35.341 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.341 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.341 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:35.599 nvme0n1 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:35.599 01:07:47 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MWU5NmRlYTU4Y2UwM2RkYTg1ZGY2M2I2NjViNGY3YzI2YWUyZDJkZjhiZTE0NDM556km/g==: 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MWU5NmRlYTU4Y2UwM2RkYTg1ZGY2M2I2NjViNGY3YzI2YWUyZDJkZjhiZTE0NDM556km/g==: 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: ]] 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 3 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
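The host/auth.sh frame numbers that repeat through this trace (@114 to @117) come from two nested loops that walk every DH group and every key index with the sha256 digest. A minimal sketch of that structure, assuming the dhgroups and keys arrays hold exactly the values visible in this part of the log (ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192 and key ids 0 through 4):

for dhgroup in "${dhgroups[@]}"; do                      # host/auth.sh@114
  for keyid in "${!keys[@]}"; do                         # host/auth.sh@115, key ids 0-4 here
    nvmet_auth_set_key   sha256 "$dhgroup" "$keyid"      # host/auth.sh@116: install the key on the target side
    connect_authenticate sha256 "$dhgroup" "$keyid"      # host/auth.sh@117: authenticate from the initiator side
  done
done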
00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.599 01:07:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:35.858 nvme0n1 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:MTY5YmU0MTFiZmExM2EzNWE2YjUxMzNkZWVhNzIwYjQ0NmRmMjFhYTJkMzQzODNkNjdmNTFiZTI5ZWNjN2M5ZGOp3HA=: 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:MTY5YmU0MTFiZmExM2EzNWE2YjUxMzNkZWVhNzIwYjQ0NmRmMjFhYTJkMzQzODNkNjdmNTFiZTI5ZWNjN2M5ZGOp3HA=: 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 4 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:20:35.858 01:07:48 
nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.858 01:07:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:36.116 nvme0n1 00:20:36.116 01:07:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.116 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.116 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:36.116 01:07:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.116 01:07:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:36.116 01:07:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=0 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTI0MTIzZTVhZjdmZjFkY2U0MjllZmFiOWQ5ZWI5ZWXDyf9a: 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTI0MTIzZTVhZjdmZjFkY2U0MjllZmFiOWQ5ZWI5ZWXDyf9a: 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: ]] 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 0 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:36.374 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.375 01:07:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.375 01:07:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:36.633 nvme0n1 00:20:36.633 01:07:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.633 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:20:36.633 01:07:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:36.633 01:07:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.633 01:07:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:36.633 01:07:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.633 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.633 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.633 01:07:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.633 01:07:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:YTNjZWNmMTY4MmM5MmRhMWJmNGM0NjhjOGQyODBkMjcxYjBlOTU1NTA2MTkxYzdiW44MJQ==: 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:YTNjZWNmMTY4MmM5MmRhMWJmNGM0NjhjOGQyODBkMjcxYjBlOTU1NTA2MTkxYzdiW44MJQ==: 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: ]] 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 1 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:36.892 
01:07:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.892 01:07:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:37.458 nvme0n1 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:YjEyMjg2MDg0YWI3Y2MzZGExZDAwOTk2ZjY4ZDNjMDAinTfj: 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:YjEyMjg2MDg0YWI3Y2MzZGExZDAwOTk2ZjY4ZDNjMDAinTfj: 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: ]] 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: 00:20:37.458 
01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 2 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:37.458 01:07:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:37.459 01:07:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.459 01:07:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.459 01:07:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:38.025 nvme0n1 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- 
host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MWU5NmRlYTU4Y2UwM2RkYTg1ZGY2M2I2NjViNGY3YzI2YWUyZDJkZjhiZTE0NDM556km/g==: 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MWU5NmRlYTU4Y2UwM2RkYTg1ZGY2M2I2NjViNGY3YzI2YWUyZDJkZjhiZTE0NDM556km/g==: 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: ]] 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 3 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:38.025 01:07:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:38.591 nvme0n1 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:MTY5YmU0MTFiZmExM2EzNWE2YjUxMzNkZWVhNzIwYjQ0NmRmMjFhYTJkMzQzODNkNjdmNTFiZTI5ZWNjN2M5ZGOp3HA=: 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:MTY5YmU0MTFiZmExM2EzNWE2YjUxMzNkZWVhNzIwYjQ0NmRmMjFhYTJkMzQzODNkNjdmNTFiZTI5ZWNjN2M5ZGOp3HA=: 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 4 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 
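Each connect_authenticate pass in this trace issues the same initiator-side RPC sequence: restrict the allowed digests and DH groups, attach the controller over TCP with the per-key DH-HMAC-CHAP options, confirm the controller actually appeared, then detach it again. A condensed sketch of one pass, using only commands and parameters visible in the surrounding entries (the rpc_cmd wrapper, the 10.0.0.1:4420 listener, the nqn.2024-02.io.spdk NQNs, and keyN/ckeyN key names assumed to have been set up earlier in the run):

digest=sha256 dhgroup=ffdhe6144 keyid=4                  # values from the entries around this point

# the controller key is only passed when one exists for this key id (key id 4 has none)
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" "${ckey[@]}"

# authentication succeeded only if the controller is visible; then tear it down
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0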
00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.591 01:07:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:39.156 nvme0n1 00:20:39.156 01:07:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.156 01:07:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:39.156 01:07:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.156 01:07:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:39.156 01:07:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:39.156 01:07:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.156 01:07:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.156 01:07:51 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:39.156 01:07:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.156 01:07:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:39.156 01:07:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.156 01:07:51 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:20:39.156 01:07:51 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:39.156 01:07:51 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:20:39.156 01:07:51 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:39.156 01:07:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:39.156 01:07:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:39.156 01:07:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:39.156 01:07:51 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTI0MTIzZTVhZjdmZjFkY2U0MjllZmFiOWQ5ZWI5ZWXDyf9a: 00:20:39.156 01:07:51 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: 00:20:39.156 01:07:51 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:39.156 01:07:51 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:20:39.156 01:07:51 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTI0MTIzZTVhZjdmZjFkY2U0MjllZmFiOWQ5ZWI5ZWXDyf9a: 00:20:39.156 01:07:51 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: ]] 00:20:39.156 01:07:51 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: 00:20:39.156 01:07:51 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 0 00:20:39.157 01:07:51 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:39.157 01:07:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:39.157 01:07:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:20:39.157 01:07:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:20:39.157 01:07:51 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:39.157 01:07:51 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:39.157 01:07:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.157 01:07:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:39.157 01:07:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.157 01:07:51 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:39.157 01:07:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:39.157 01:07:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:39.157 01:07:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:39.157 01:07:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:39.157 01:07:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:39.157 01:07:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:39.157 01:07:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:39.157 01:07:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:39.157 01:07:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:39.157 01:07:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:39.157 01:07:51 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.157 01:07:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.157 01:07:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:40.143 nvme0n1 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.143 01:07:52 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:YTNjZWNmMTY4MmM5MmRhMWJmNGM0NjhjOGQyODBkMjcxYjBlOTU1NTA2MTkxYzdiW44MJQ==: 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:YTNjZWNmMTY4MmM5MmRhMWJmNGM0NjhjOGQyODBkMjcxYjBlOTU1NTA2MTkxYzdiW44MJQ==: 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: ]] 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 1 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.143 01:07:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:41.074 nvme0n1 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:YjEyMjg2MDg0YWI3Y2MzZGExZDAwOTk2ZjY4ZDNjMDAinTfj: 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:YjEyMjg2MDg0YWI3Y2MzZGExZDAwOTk2ZjY4ZDNjMDAinTfj: 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: ]] 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 2 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe8192 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.074 01:07:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:42.005 nvme0n1 00:20:42.005 01:07:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.005 01:07:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.005 01:07:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:42.005 01:07:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.005 01:07:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:42.005 01:07:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.005 01:07:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.005 01:07:54 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:42.005 01:07:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.005 01:07:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:42.005 01:07:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.005 01:07:54 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:42.005 01:07:54 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:20:42.005 01:07:54 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:42.005 01:07:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:42.005 01:07:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:42.005 01:07:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:42.006 01:07:54 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MWU5NmRlYTU4Y2UwM2RkYTg1ZGY2M2I2NjViNGY3YzI2YWUyZDJkZjhiZTE0NDM556km/g==: 00:20:42.006 01:07:54 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: 00:20:42.006 
01:07:54 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:42.006 01:07:54 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:20:42.006 01:07:54 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MWU5NmRlYTU4Y2UwM2RkYTg1ZGY2M2I2NjViNGY3YzI2YWUyZDJkZjhiZTE0NDM556km/g==: 00:20:42.006 01:07:54 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: ]] 00:20:42.006 01:07:54 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: 00:20:42.006 01:07:54 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 3 00:20:42.006 01:07:54 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:42.006 01:07:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:42.006 01:07:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:20:42.006 01:07:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:20:42.006 01:07:54 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:42.006 01:07:54 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:42.006 01:07:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.006 01:07:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:42.006 01:07:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.006 01:07:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:42.006 01:07:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:42.006 01:07:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:42.006 01:07:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:42.006 01:07:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.006 01:07:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.006 01:07:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:42.006 01:07:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:42.006 01:07:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:42.006 01:07:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:42.006 01:07:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:42.006 01:07:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:42.006 01:07:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.006 01:07:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:42.938 nvme0n1 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.938 01:07:55 
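Note: the echo sequence at host/auth.sh@48-51 near the start of this iteration pushes the digest, DH group and keyid 3 secrets into the kernel nvmet target before the reconnect. The trace does not show where those echoes are redirected; the sketch below assumes the usual nvmet configfs layout and attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key under the host entry), so treat the paths as illustrative rather than lifted from this run.

  # Illustrative target-side provisioning for one digest/dhgroup/keyid combination.
  hostnqn=nqn.2024-02.io.spdk:host0
  host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn
  key='DHHC-1:02:placeholder:'      # host secret (key 3 in the trace above)
  ckey='DHHC-1:00:placeholder:'     # controller secret; empty when auth is one-way

  echo 'hmac(sha256)' > "$host_dir/dhchap_hash"     # digest for this iteration
  echo ffdhe8192      > "$host_dir/dhchap_dhgroup"  # DH group for this iteration
  echo "$key"         > "$host_dir/dhchap_key"
  [[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"  # bidirectional auth only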
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:MTY5YmU0MTFiZmExM2EzNWE2YjUxMzNkZWVhNzIwYjQ0NmRmMjFhYTJkMzQzODNkNjdmNTFiZTI5ZWNjN2M5ZGOp3HA=: 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:MTY5YmU0MTFiZmExM2EzNWE2YjUxMzNkZWVhNzIwYjQ0NmRmMjFhYTJkMzQzODNkNjdmNTFiZTI5ZWNjN2M5ZGOp3HA=: 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 4 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:42.938 01:07:55 
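Note: the keyid 4 pass above has an empty ckey ('' at host/auth.sh@46, so the echo at @51 is skipped), and the attach that follows passes only --dhchap-key key4, i.e. authentication is unidirectional. The line at host/auth.sh@71 shows how connect_authenticate drops the extra flag: bash's ${parameter:+word} expansion yields the word list only when the parameter is set and non-empty. A minimal, self-contained illustration of that idiom (the secrets are placeholders, not the test's real keys):

  # ${ckeys[keyid]:+...} expands to the option pair only when a controller key exists.
  ckeys[3]='DHHC-1:00:placeholder:'
  ckeys[4]=''

  for keyid in 3 4; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid -> ${ckey[*]:-<no controller key flag>}"
  done
  # keyid=3 -> --dhchap-ctrlr-key ckey3
  # keyid=4 -> <no controller key flag>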
nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.938 01:07:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:43.872 nvme0n1 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTI0MTIzZTVhZjdmZjFkY2U0MjllZmFiOWQ5ZWI5ZWXDyf9a: 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTI0MTIzZTVhZjdmZjFkY2U0MjllZmFiOWQ5ZWI5ZWXDyf9a: 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: ]] 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 0 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:20:43.872 
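Note: the attach above closes out the sha256/ffdhe8192 sweep (keyid 4, host key only), and the loops at host/auth.sh@113-115 then advance to sha384 with the DH-group list restarted at ffdhe2048. Each such iteration reduces to the two initiator-side RPCs visible in the trace. Issued outside the harness they would look roughly like the following, where scripts/rpc.py is the usual SPDK RPC client standing in for the rpc_cmd wrapper, and key4 names a DH-HMAC-CHAP key registered with the target application earlier in the run (the registration is outside this excerpt).

  # Restrict negotiation to one digest/DH-group pair, then connect with DH-HMAC-CHAP keys.
  ./scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key4   # keyids that have a controller key also pass --dhchap-ctrlr-key ckeyN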
01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.872 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.130 nvme0n1 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=1 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:YTNjZWNmMTY4MmM5MmRhMWJmNGM0NjhjOGQyODBkMjcxYjBlOTU1NTA2MTkxYzdiW44MJQ==: 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:YTNjZWNmMTY4MmM5MmRhMWJmNGM0NjhjOGQyODBkMjcxYjBlOTU1NTA2MTkxYzdiW44MJQ==: 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: ]] 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 1 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.130 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.389 nvme0n1 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:YjEyMjg2MDg0YWI3Y2MzZGExZDAwOTk2ZjY4ZDNjMDAinTfj: 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:YjEyMjg2MDg0YWI3Y2MzZGExZDAwOTk2ZjY4ZDNjMDAinTfj: 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: ]] 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 2 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.389 nvme0n1 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.389 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MWU5NmRlYTU4Y2UwM2RkYTg1ZGY2M2I2NjViNGY3YzI2YWUyZDJkZjhiZTE0NDM556km/g==: 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MWU5NmRlYTU4Y2UwM2RkYTg1ZGY2M2I2NjViNGY3YzI2YWUyZDJkZjhiZTE0NDM556km/g==: 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: ]] 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate 
sha384 ffdhe2048 3 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.650 nvme0n1 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:20:44.650 01:07:56 
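Note: every reconnect is preceded by the get_main_ns_ip expansion (nvmf/common.sh@728-742), which picks the address to dial from an associative array keyed by transport: rdma maps to NVMF_FIRST_TARGET_IP, tcp to NVMF_INITIATOR_IP, and the chosen variable is dereferenced to 10.0.0.1 in this run. Condensed into a standalone function it looks roughly like this; the name of the transport variable (TEST_TRANSPORT) is an assumption, since the trace only shows its value, tcp.

  # Condensed sketch of the address-selection logic seen at nvmf/common.sh@728-742.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          ["rdma"]=NVMF_FIRST_TARGET_IP
          ["tcp"]=NVMF_INITIATOR_IP
      )
      [[ -z $TEST_TRANSPORT ]] && return 1                # transport must be known (tcp here)
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}                # -> NVMF_INITIATOR_IP, as at @735
      ip=${!ip}                                           # indirect expansion -> 10.0.0.1
      [[ -z $ip ]] && return 1
      echo "$ip"
  }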
nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:44.650 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:44.651 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:MTY5YmU0MTFiZmExM2EzNWE2YjUxMzNkZWVhNzIwYjQ0NmRmMjFhYTJkMzQzODNkNjdmNTFiZTI5ZWNjN2M5ZGOp3HA=: 00:20:44.651 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:44.651 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:44.651 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:44.651 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:MTY5YmU0MTFiZmExM2EzNWE2YjUxMzNkZWVhNzIwYjQ0NmRmMjFhYTJkMzQzODNkNjdmNTFiZTI5ZWNjN2M5ZGOp3HA=: 00:20:44.651 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:44.651 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 4 00:20:44.651 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:44.651 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:44.651 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:20:44.651 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:20:44.651 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:44.651 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:44.651 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.651 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.651 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.651 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:44.651 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:44.651 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:44.651 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:44.651 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.651 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.651 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:44.651 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.651 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:44.651 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:44.651 01:07:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:44.651 01:07:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:44.651 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.651 01:07:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.908 nvme0n1 00:20:44.908 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.908 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 
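Note: after each authenticated connect the test verifies that the expected controller name came back and then detaches it before provisioning the next key (host/auth.sh@77-78). Reproduced as a standalone check, with the scripts/rpc.py path assumed as in the earlier sketch:

  # Confirm the expected controller came up, then tear it down for the next iteration.
  name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]] || { echo "unexpected controller list: $name" >&2; exit 1; }
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0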
00:20:44.908 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.908 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.908 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:44.908 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.908 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.908 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTI0MTIzZTVhZjdmZjFkY2U0MjllZmFiOWQ5ZWI5ZWXDyf9a: 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTI0MTIzZTVhZjdmZjFkY2U0MjllZmFiOWQ5ZWI5ZWXDyf9a: 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: ]] 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 0 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:44.909 
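Note: the detach above is followed by the dhgroup loop advancing from ffdhe2048 to ffdhe3072 while the digest stays at sha384; together with the sha256/ffdhe8192 passes earlier in the trace this pins down the nesting of the loops at host/auth.sh@113-115. Below is a runnable reconstruction of that sweep order, abridged to the digests, DH groups and keyids actually visible in this excerpt (the full arrays in the test may contain more entries), with the two per-iteration steps stubbed out as echoes.

  # Abridged reconstruction of the sweep order (host/auth.sh@113-117).
  keys=(key0 key1 key2 key3 key4)                 # one DHHC-1 secret per keyid in the real test
  digests=(sha256 sha384)
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe8192)

  for digest in "${digests[@]}"; do               # @113
      for dhgroup in "${dhgroups[@]}"; do         # @114
          for keyid in "${!keys[@]}"; do          # @115
              echo "nvmet_auth_set_key $digest $dhgroup $keyid"    # @116: provision the target
              echo "connect_authenticate $digest $dhgroup $keyid"  # @117: reconnect the host
          done
      done
  done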
01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.909 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.167 nvme0n1 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:YTNjZWNmMTY4MmM5MmRhMWJmNGM0NjhjOGQyODBkMjcxYjBlOTU1NTA2MTkxYzdiW44MJQ==: 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:YTNjZWNmMTY4MmM5MmRhMWJmNGM0NjhjOGQyODBkMjcxYjBlOTU1NTA2MTkxYzdiW44MJQ==: 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: ]] 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 1 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.167 nvme0n1 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.167 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:YjEyMjg2MDg0YWI3Y2MzZGExZDAwOTk2ZjY4ZDNjMDAinTfj: 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:YjEyMjg2MDg0YWI3Y2MzZGExZDAwOTk2ZjY4ZDNjMDAinTfj: 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: ]] 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 2 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.425 nvme0n1 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MWU5NmRlYTU4Y2UwM2RkYTg1ZGY2M2I2NjViNGY3YzI2YWUyZDJkZjhiZTE0NDM556km/g==: 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MWU5NmRlYTU4Y2UwM2RkYTg1ZGY2M2I2NjViNGY3YzI2YWUyZDJkZjhiZTE0NDM556km/g==: 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: ]] 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 3 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:45.425 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.683 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.683 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:45.683 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:45.683 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:45.683 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:45.683 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.683 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.683 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:45.683 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.683 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:45.683 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:45.683 01:07:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:45.683 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:45.683 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.683 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.683 nvme0n1 00:20:45.683 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.683 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.683 01:07:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:45.683 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.683 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.683 01:07:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.683 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.683 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.683 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.683 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.683 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.683 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:45.683 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:20:45.683 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.683 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:45.683 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:45.683 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:45.683 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:MTY5YmU0MTFiZmExM2EzNWE2YjUxMzNkZWVhNzIwYjQ0NmRmMjFhYTJkMzQzODNkNjdmNTFiZTI5ZWNjN2M5ZGOp3HA=: 00:20:45.683 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:45.683 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:45.683 01:07:58 nvmf_tcp.nvmf_auth -- 
host/auth.sh@49 -- # echo ffdhe3072 00:20:45.683 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:MTY5YmU0MTFiZmExM2EzNWE2YjUxMzNkZWVhNzIwYjQ0NmRmMjFhYTJkMzQzODNkNjdmNTFiZTI5ZWNjN2M5ZGOp3HA=: 00:20:45.683 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:45.683 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 4 00:20:45.683 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:45.683 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:45.683 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:20:45.683 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:20:45.683 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:45.683 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:45.683 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.683 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.683 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.683 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:45.683 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:45.683 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:45.683 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:45.683 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.683 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.684 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:45.684 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.684 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:45.684 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:45.684 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:45.684 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:45.684 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.684 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.942 nvme0n1 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTI0MTIzZTVhZjdmZjFkY2U0MjllZmFiOWQ5ZWI5ZWXDyf9a: 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTI0MTIzZTVhZjdmZjFkY2U0MjllZmFiOWQ5ZWI5ZWXDyf9a: 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: ]] 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 0 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.942 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:46.200 nvme0n1 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:YTNjZWNmMTY4MmM5MmRhMWJmNGM0NjhjOGQyODBkMjcxYjBlOTU1NTA2MTkxYzdiW44MJQ==: 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:YTNjZWNmMTY4MmM5MmRhMWJmNGM0NjhjOGQyODBkMjcxYjBlOTU1NTA2MTkxYzdiW44MJQ==: 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: ]] 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 1 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@71 
-- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:46.200 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:46.201 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.201 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.201 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:46.201 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.201 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:46.201 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:46.201 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:46.201 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.201 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.201 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:46.458 nvme0n1 00:20:46.458 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.458 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.458 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:46.715 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.715 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:46.715 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.715 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.715 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.715 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.715 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:46.715 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.715 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:46.715 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:20:46.715 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.715 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:46.715 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:46.715 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:46.715 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:01:YjEyMjg2MDg0YWI3Y2MzZGExZDAwOTk2ZjY4ZDNjMDAinTfj: 00:20:46.715 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: 00:20:46.715 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:46.715 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:46.715 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:YjEyMjg2MDg0YWI3Y2MzZGExZDAwOTk2ZjY4ZDNjMDAinTfj: 00:20:46.715 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: ]] 00:20:46.715 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: 00:20:46.715 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 2 00:20:46.715 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:46.715 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:46.716 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:20:46.716 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:20:46.716 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.716 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:46.716 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.716 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:46.716 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.716 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:46.716 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:46.716 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:46.716 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:46.716 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.716 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.716 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:46.716 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.716 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:46.716 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:46.716 01:07:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:46.716 01:07:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.716 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.716 01:07:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:46.974 nvme0n1 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:46.974 01:07:59 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MWU5NmRlYTU4Y2UwM2RkYTg1ZGY2M2I2NjViNGY3YzI2YWUyZDJkZjhiZTE0NDM556km/g==: 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MWU5NmRlYTU4Y2UwM2RkYTg1ZGY2M2I2NjViNGY3YzI2YWUyZDJkZjhiZTE0NDM556km/g==: 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: ]] 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 3 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
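For reference, the sequence the xtrace above keeps repeating for each digest/dhgroup/keyid combination condenses to the sketch below. It uses only the calls visible in the trace; rpc_cmd and nvmet_auth_set_key are helpers defined in the SPDK test scripts (common scripts and host/auth.sh), and the DHHC-1 key material is generated earlier in the run and not shown in this excerpt, so treat this as an illustrative condensation rather than a standalone script.

  # target side: install the key for this digest/dhgroup/keyid (helper from host/auth.sh)
  nvmet_auth_set_key sha384 ffdhe4096 3
  # host side: restrict the initiator to the digest/dhgroup under test
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
  # connect with DH-HMAC-CHAP using the key pair for this keyid
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3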
00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.974 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:47.232 nvme0n1 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:MTY5YmU0MTFiZmExM2EzNWE2YjUxMzNkZWVhNzIwYjQ0NmRmMjFhYTJkMzQzODNkNjdmNTFiZTI5ZWNjN2M5ZGOp3HA=: 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:MTY5YmU0MTFiZmExM2EzNWE2YjUxMzNkZWVhNzIwYjQ0NmRmMjFhYTJkMzQzODNkNjdmNTFiZTI5ZWNjN2M5ZGOp3HA=: 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 4 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:20:47.232 01:07:59 
nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.232 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:47.490 nvme0n1 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=0 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTI0MTIzZTVhZjdmZjFkY2U0MjllZmFiOWQ5ZWI5ZWXDyf9a: 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTI0MTIzZTVhZjdmZjFkY2U0MjllZmFiOWQ5ZWI5ZWXDyf9a: 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: ]] 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 0 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:47.490 01:07:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:47.748 01:07:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:47.748 01:07:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.748 01:07:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.748 01:07:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:47.748 01:07:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.748 01:07:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:47.748 01:07:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:47.748 01:07:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:47.748 01:07:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.748 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.748 01:07:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:48.005 nvme0n1 00:20:48.005 01:08:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.005 01:08:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:20:48.005 01:08:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.005 01:08:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:48.005 01:08:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:YTNjZWNmMTY4MmM5MmRhMWJmNGM0NjhjOGQyODBkMjcxYjBlOTU1NTA2MTkxYzdiW44MJQ==: 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:YTNjZWNmMTY4MmM5MmRhMWJmNGM0NjhjOGQyODBkMjcxYjBlOTU1NTA2MTkxYzdiW44MJQ==: 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: ]] 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 1 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:48.268 
01:08:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.268 01:08:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:48.877 nvme0n1 00:20:48.877 01:08:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.877 01:08:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.877 01:08:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:48.877 01:08:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.877 01:08:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:48.877 01:08:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:YjEyMjg2MDg0YWI3Y2MzZGExZDAwOTk2ZjY4ZDNjMDAinTfj: 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:YjEyMjg2MDg0YWI3Y2MzZGExZDAwOTk2ZjY4ZDNjMDAinTfj: 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: ]] 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: 00:20:48.877 
01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 2 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:48.877 01:08:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:48.878 01:08:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:48.878 01:08:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:48.878 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.878 01:08:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.878 01:08:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:49.135 nvme0n1 00:20:49.135 01:08:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.135 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.135 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:49.135 01:08:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.135 01:08:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:49.135 01:08:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- 
host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MWU5NmRlYTU4Y2UwM2RkYTg1ZGY2M2I2NjViNGY3YzI2YWUyZDJkZjhiZTE0NDM556km/g==: 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MWU5NmRlYTU4Y2UwM2RkYTg1ZGY2M2I2NjViNGY3YzI2YWUyZDJkZjhiZTE0NDM556km/g==: 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: ]] 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 3 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:49.392 01:08:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:49.957 nvme0n1 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:MTY5YmU0MTFiZmExM2EzNWE2YjUxMzNkZWVhNzIwYjQ0NmRmMjFhYTJkMzQzODNkNjdmNTFiZTI5ZWNjN2M5ZGOp3HA=: 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:MTY5YmU0MTFiZmExM2EzNWE2YjUxMzNkZWVhNzIwYjQ0NmRmMjFhYTJkMzQzODNkNjdmNTFiZTI5ZWNjN2M5ZGOp3HA=: 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 4 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 
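Each connect in the trace is then verified and torn down the same way; the following is condensed from the host/auth.sh@77/78 lines above (the jq filter, the nvme0 name check, and the detach are exactly what the script runs, while the surrounding control flow is paraphrased for readability).

  # confirm the authenticated controller actually came up, then detach it
  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0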
00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.958 01:08:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:50.523 nvme0n1 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTI0MTIzZTVhZjdmZjFkY2U0MjllZmFiOWQ5ZWI5ZWXDyf9a: 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTI0MTIzZTVhZjdmZjFkY2U0MjllZmFiOWQ5ZWI5ZWXDyf9a: 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: ]] 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 0 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.523 01:08:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:51.458 nvme0n1 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.458 01:08:03 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:YTNjZWNmMTY4MmM5MmRhMWJmNGM0NjhjOGQyODBkMjcxYjBlOTU1NTA2MTkxYzdiW44MJQ==: 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:YTNjZWNmMTY4MmM5MmRhMWJmNGM0NjhjOGQyODBkMjcxYjBlOTU1NTA2MTkxYzdiW44MJQ==: 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: ]] 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 1 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.458 01:08:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:52.390 nvme0n1 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:YjEyMjg2MDg0YWI3Y2MzZGExZDAwOTk2ZjY4ZDNjMDAinTfj: 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:YjEyMjg2MDg0YWI3Y2MzZGExZDAwOTk2ZjY4ZDNjMDAinTfj: 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: ]] 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 2 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe8192 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.390 01:08:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:53.321 nvme0n1 00:20:53.321 01:08:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.321 01:08:05 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.321 01:08:05 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:53.321 01:08:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.321 01:08:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:53.321 01:08:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.321 01:08:05 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.321 01:08:05 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.321 01:08:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.321 01:08:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:53.321 01:08:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.321 01:08:05 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:53.321 01:08:05 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:20:53.321 01:08:05 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:53.321 01:08:05 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:53.321 01:08:05 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:53.321 01:08:05 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:53.321 01:08:05 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MWU5NmRlYTU4Y2UwM2RkYTg1ZGY2M2I2NjViNGY3YzI2YWUyZDJkZjhiZTE0NDM556km/g==: 00:20:53.321 01:08:05 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: 00:20:53.321 
01:08:05 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:53.322 01:08:05 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:20:53.322 01:08:05 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MWU5NmRlYTU4Y2UwM2RkYTg1ZGY2M2I2NjViNGY3YzI2YWUyZDJkZjhiZTE0NDM556km/g==: 00:20:53.322 01:08:05 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: ]] 00:20:53.322 01:08:05 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: 00:20:53.322 01:08:05 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 3 00:20:53.322 01:08:05 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:53.322 01:08:05 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:53.322 01:08:05 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:20:53.322 01:08:05 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:20:53.322 01:08:05 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:53.322 01:08:05 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:53.322 01:08:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.322 01:08:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:53.322 01:08:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.322 01:08:05 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:53.322 01:08:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:53.322 01:08:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:53.322 01:08:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:53.322 01:08:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.322 01:08:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.322 01:08:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:53.322 01:08:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.322 01:08:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:53.322 01:08:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:53.322 01:08:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:53.322 01:08:05 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:53.322 01:08:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.322 01:08:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:54.256 nvme0n1 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.256 01:08:06 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:MTY5YmU0MTFiZmExM2EzNWE2YjUxMzNkZWVhNzIwYjQ0NmRmMjFhYTJkMzQzODNkNjdmNTFiZTI5ZWNjN2M5ZGOp3HA=: 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:MTY5YmU0MTFiZmExM2EzNWE2YjUxMzNkZWVhNzIwYjQ0NmRmMjFhYTJkMzQzODNkNjdmNTFiZTI5ZWNjN2M5ZGOp3HA=: 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 4 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:54.256 01:08:06 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.256 01:08:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:55.189 nvme0n1 00:20:55.189 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.189 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.189 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:55.189 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.189 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:55.189 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.189 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.189 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.189 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.189 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:55.189 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.189 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:20:55.189 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:20:55.189 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:55.189 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:20:55.189 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:55.189 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:55.189 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:55.189 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:55.189 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTI0MTIzZTVhZjdmZjFkY2U0MjllZmFiOWQ5ZWI5ZWXDyf9a: 00:20:55.189 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: 00:20:55.189 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:55.189 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:55.189 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTI0MTIzZTVhZjdmZjFkY2U0MjllZmFiOWQ5ZWI5ZWXDyf9a: 00:20:55.190 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: ]] 00:20:55.190 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: 00:20:55.190 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 0 00:20:55.190 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:55.190 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:55.190 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:20:55.190 
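Note: the xtrace markers host/auth.sh@113-@117 appearing above show that this stretch of the log is driven by a nested sweep over every digest, DH group and key index. A minimal reconstruction of that driver loop, assuming the digests/dhgroups/keys/ckeys arrays are populated earlier in auth.sh (not shown in this excerpt):

for digest in "${digests[@]}"; do            # host/auth.sh@113
  for dhgroup in "${dhgroups[@]}"; do        # host/auth.sh@114
    for keyid in "${!keys[@]}"; do           # host/auth.sh@115
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side, host/auth.sh@116
      connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side, host/auth.sh@117
    done
  done
done

Each iteration repeats the same set-key / connect / verify / detach sequence with a different (digest, dhgroup, keyid) tuple; at this point in the log the sweep has moved from sha384/ffdhe8192 on to sha512/ffdhe2048.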
01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:20:55.190 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:55.190 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:55.190 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.190 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:55.190 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.190 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:55.190 01:08:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:55.190 01:08:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:55.190 01:08:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:55.190 01:08:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.190 01:08:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.190 01:08:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:55.190 01:08:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:55.190 01:08:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:55.190 01:08:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:55.190 01:08:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:55.190 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.190 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.190 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:55.447 nvme0n1 00:20:55.447 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.447 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.447 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.447 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:55.447 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:55.447 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.447 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.447 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.447 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.447 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:55.447 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.447 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:55.447 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:20:55.447 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:55.447 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:55.447 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:55.447 01:08:07 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=1 00:20:55.447 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:YTNjZWNmMTY4MmM5MmRhMWJmNGM0NjhjOGQyODBkMjcxYjBlOTU1NTA2MTkxYzdiW44MJQ==: 00:20:55.447 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: 00:20:55.447 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:55.447 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:55.447 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:YTNjZWNmMTY4MmM5MmRhMWJmNGM0NjhjOGQyODBkMjcxYjBlOTU1NTA2MTkxYzdiW44MJQ==: 00:20:55.448 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: ]] 00:20:55.448 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: 00:20:55.448 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 1 00:20:55.448 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:55.448 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:55.448 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:20:55.448 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:20:55.448 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:55.448 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:55.448 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.448 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:55.448 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.448 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:55.448 01:08:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:55.448 01:08:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:55.448 01:08:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:55.448 01:08:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.448 01:08:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.448 01:08:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:55.448 01:08:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:55.448 01:08:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:55.448 01:08:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:55.448 01:08:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:55.448 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.448 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.448 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:55.448 nvme0n1 00:20:55.448 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.448 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:20:55.448 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.448 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:55.448 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:55.448 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:YjEyMjg2MDg0YWI3Y2MzZGExZDAwOTk2ZjY4ZDNjMDAinTfj: 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:YjEyMjg2MDg0YWI3Y2MzZGExZDAwOTk2ZjY4ZDNjMDAinTfj: 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: ]] 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 2 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.706 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:55.707 nvme0n1 00:20:55.707 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.707 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.707 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.707 01:08:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:55.707 01:08:07 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MWU5NmRlYTU4Y2UwM2RkYTg1ZGY2M2I2NjViNGY3YzI2YWUyZDJkZjhiZTE0NDM556km/g==: 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MWU5NmRlYTU4Y2UwM2RkYTg1ZGY2M2I2NjViNGY3YzI2YWUyZDJkZjhiZTE0NDM556km/g==: 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: ]] 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate 
sha512 ffdhe2048 3 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.707 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:55.965 nvme0n1 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:20:55.965 01:08:08 
nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:MTY5YmU0MTFiZmExM2EzNWE2YjUxMzNkZWVhNzIwYjQ0NmRmMjFhYTJkMzQzODNkNjdmNTFiZTI5ZWNjN2M5ZGOp3HA=: 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:MTY5YmU0MTFiZmExM2EzNWE2YjUxMzNkZWVhNzIwYjQ0NmRmMjFhYTJkMzQzODNkNjdmNTFiZTI5ZWNjN2M5ZGOp3HA=: 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 4 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.965 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.223 nvme0n1 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 
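Note: the nvmet_auth_set_key blocks above (host/auth.sh@42-@51) pick the digest, DH group, key and optional controller key on the target side before each connect. bash xtrace does not print redirections, so the bare echo lines are presumably being written into the nvmet configfs entry for the host NQN. A hedged sketch of what that likely amounts to (the configfs path and attribute names are assumptions based on the Linux nvmet-auth layout, not taken from this log):

host_cfs=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha512)' > "$host_cfs/dhchap_hash"      # host/auth.sh@48
echo ffdhe2048      > "$host_cfs/dhchap_dhgroup"   # host/auth.sh@49
echo "$key"         > "$host_cfs/dhchap_key"       # host/auth.sh@50
[[ -n $ckey ]] && echo "$ckey" > "$host_cfs/dhchap_ctrl_key"   # host/auth.sh@51; skipped when ckey is empty, as for keyid 4 above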
00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTI0MTIzZTVhZjdmZjFkY2U0MjllZmFiOWQ5ZWI5ZWXDyf9a: 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTI0MTIzZTVhZjdmZjFkY2U0MjllZmFiOWQ5ZWI5ZWXDyf9a: 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: ]] 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 0 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:56.223 
01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.223 nvme0n1 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.223 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.481 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.481 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.481 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.481 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.481 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.481 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:56.481 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:20:56.481 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:56.481 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:56.481 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:56.481 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:56.481 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:YTNjZWNmMTY4MmM5MmRhMWJmNGM0NjhjOGQyODBkMjcxYjBlOTU1NTA2MTkxYzdiW44MJQ==: 00:20:56.481 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: 00:20:56.481 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:56.481 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:56.481 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:YTNjZWNmMTY4MmM5MmRhMWJmNGM0NjhjOGQyODBkMjcxYjBlOTU1NTA2MTkxYzdiW44MJQ==: 00:20:56.481 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: ]] 00:20:56.481 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: 00:20:56.481 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 1 00:20:56.481 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:56.481 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:56.481 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:20:56.481 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:20:56.481 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:56.481 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:56.481 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.481 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.481 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.481 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:56.481 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:56.481 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:56.481 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:56.481 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.482 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.482 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:56.482 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:56.482 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:56.482 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:56.482 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:56.482 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.482 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.482 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.482 nvme0n1 00:20:56.482 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.482 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:56.482 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.482 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:56.482 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.482 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.482 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.482 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.482 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.482 01:08:08 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:YjEyMjg2MDg0YWI3Y2MzZGExZDAwOTk2ZjY4ZDNjMDAinTfj: 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:YjEyMjg2MDg0YWI3Y2MzZGExZDAwOTk2ZjY4ZDNjMDAinTfj: 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: ]] 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 2 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.740 01:08:08 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.740 nvme0n1 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MWU5NmRlYTU4Y2UwM2RkYTg1ZGY2M2I2NjViNGY3YzI2YWUyZDJkZjhiZTE0NDM556km/g==: 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MWU5NmRlYTU4Y2UwM2RkYTg1ZGY2M2I2NjViNGY3YzI2YWUyZDJkZjhiZTE0NDM556km/g==: 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: ]] 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 3 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
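Note: the repeating connect_authenticate block (host/auth.sh@68-@78) is the host-side half of every iteration: it restricts the SPDK initiator to a single digest/DH-group pair, attaches the controller with the matching DH-HMAC-CHAP key, checks that a controller named nvme0 actually came up, and detaches it again. A sketch reconstructed from the trace (the function body is inferred, not the verbatim helper; key${keyid}/ckey${keyid} refer to keys registered earlier in the test, while the address, port and NQNs are the literal values visible above):

connect_authenticate() {
  local digest=$1 dhgroup=$2 keyid=$3
  local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$(get_main_ns_ip)" -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey[@]}"
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
}

The bare nvme0n1 lines interleaved in the log are the bdev name reported by bdev_nvme_attach_controller once authentication succeeds.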
xtrace_disable 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.740 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.998 nvme0n1 00:20:56.998 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.998 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:56.998 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.998 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.998 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:56.998 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.998 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.998 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.998 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.998 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.998 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.998 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:56.998 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:20:56.998 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:56.998 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:56.998 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:56.998 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:56.998 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:MTY5YmU0MTFiZmExM2EzNWE2YjUxMzNkZWVhNzIwYjQ0NmRmMjFhYTJkMzQzODNkNjdmNTFiZTI5ZWNjN2M5ZGOp3HA=: 00:20:56.998 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:56.998 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:56.998 01:08:09 nvmf_tcp.nvmf_auth -- 
host/auth.sh@49 -- # echo ffdhe3072 00:20:56.998 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:MTY5YmU0MTFiZmExM2EzNWE2YjUxMzNkZWVhNzIwYjQ0NmRmMjFhYTJkMzQzODNkNjdmNTFiZTI5ZWNjN2M5ZGOp3HA=: 00:20:56.998 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:56.998 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 4 00:20:56.998 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:56.998 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:56.998 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:20:56.998 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:20:56.998 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:56.998 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:56.998 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.998 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:56.999 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.999 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:56.999 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:56.999 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:56.999 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:56.999 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.999 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.999 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:56.999 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:56.999 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:56.999 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:56.999 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:56.999 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:56.999 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.999 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:57.256 nvme0n1 00:20:57.256 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.256 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.256 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.256 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:57.256 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:57.256 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.256 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.256 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.256 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.256 01:08:09 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:20:57.256 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.256 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:20:57.256 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:57.256 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:20:57.256 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.256 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:57.256 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:57.256 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:57.256 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTI0MTIzZTVhZjdmZjFkY2U0MjllZmFiOWQ5ZWI5ZWXDyf9a: 00:20:57.256 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: 00:20:57.256 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:57.256 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:57.256 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTI0MTIzZTVhZjdmZjFkY2U0MjllZmFiOWQ5ZWI5ZWXDyf9a: 00:20:57.256 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: ]] 00:20:57.257 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: 00:20:57.257 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 0 00:20:57.257 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:57.257 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:57.257 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:20:57.257 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:20:57.257 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:57.257 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:57.257 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.257 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:57.257 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.257 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:57.257 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:57.257 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:57.257 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:57.257 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.257 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.257 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:57.257 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:57.257 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:57.257 01:08:09 nvmf_tcp.nvmf_auth -- 
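Note: every secret exercised here uses the DH-HMAC-CHAP representation DHHC-1:xx:&lt;base64&gt;:, and the two-digit field varies across the keys in this log (00, 01, 02, 03); in that format it identifies the hash associated with the secret, with 00 indicating an untransformed secret. As a hedged illustration (assuming a reasonably recent nvme-cli that ships gen-dhchap-key; the tool is not part of the test traced above), a secret of the same shape as the keyid-4 key can be generated with:

nvme gen-dhchap-key --hmac=3 --nqn=nqn.2024-02.io.spdk:host0   # prints a DHHC-1:03:<base64>: secret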
nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:57.257 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:57.257 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.257 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.257 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:57.515 nvme0n1 00:20:57.515 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.515 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.515 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:57.515 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.515 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:57.515 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.515 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.515 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.515 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.515 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:57.515 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.515 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:57.515 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:20:57.516 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.516 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:57.516 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:57.516 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:57.516 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:YTNjZWNmMTY4MmM5MmRhMWJmNGM0NjhjOGQyODBkMjcxYjBlOTU1NTA2MTkxYzdiW44MJQ==: 00:20:57.516 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: 00:20:57.516 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:57.516 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:57.516 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:YTNjZWNmMTY4MmM5MmRhMWJmNGM0NjhjOGQyODBkMjcxYjBlOTU1NTA2MTkxYzdiW44MJQ==: 00:20:57.516 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: ]] 00:20:57.516 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: 00:20:57.516 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 1 00:20:57.516 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:57.516 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:57.516 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:20:57.516 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:20:57.516 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@71 
-- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:57.516 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:57.516 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.516 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:57.516 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.516 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:57.516 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:57.516 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:57.516 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:57.516 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.516 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.516 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:57.516 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:57.516 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:57.516 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:57.516 01:08:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:57.516 01:08:09 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.516 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.516 01:08:09 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:57.773 nvme0n1 00:20:57.773 01:08:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.773 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.773 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:57.773 01:08:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.773 01:08:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:57.774 01:08:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.032 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.032 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:01:YjEyMjg2MDg0YWI3Y2MzZGExZDAwOTk2ZjY4ZDNjMDAinTfj: 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:YjEyMjg2MDg0YWI3Y2MzZGExZDAwOTk2ZjY4ZDNjMDAinTfj: 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: ]] 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 2 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.033 01:08:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:58.292 nvme0n1 00:20:58.292 01:08:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.293 01:08:10 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MWU5NmRlYTU4Y2UwM2RkYTg1ZGY2M2I2NjViNGY3YzI2YWUyZDJkZjhiZTE0NDM556km/g==: 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MWU5NmRlYTU4Y2UwM2RkYTg1ZGY2M2I2NjViNGY3YzI2YWUyZDJkZjhiZTE0NDM556km/g==: 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: ]] 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 3 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
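For reference, each keyid iteration in this trace repeats the same host/auth.sh pattern: the target-side key is installed with nvmet_auth_set_key, then connect_authenticate restricts the initiator to the digest and DH group under test and attaches a controller with the matching --dhchap-key, adding --dhchap-ctrlr-key only when a controller key is configured. The sketch below paraphrases that loop as it appears in the trace rather than quoting the script verbatim; rpc_cmd is the test suite's JSON-RPC helper, the keys/ckeys arrays and the key0..key4 / ckey0..ckey3 key names are registered earlier in the test (not shown here), and the address and NQNs are the ones used in this run.

# Hedged sketch of the per-key loop visible in this trace (paraphrase, not the
# verbatim host/auth.sh source).
for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
  for keyid in "${!keys[@]}"; do                    # keyids 0..4 in this run
    # Target side: install the key for this digest/DH group/keyid
    # (the nvmet_auth_set_key calls in the trace).
    nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
    # Initiator side: accept only the digest and DH group under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
    # Attach with the host key; a controller key is passed only when
    # ckey$keyid is non-empty (keyid 4 has no controller key in this run).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
    # Authentication succeeded if the controller shows up, then clean up.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
  done
done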
00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.293 01:08:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:58.551 nvme0n1 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:MTY5YmU0MTFiZmExM2EzNWE2YjUxMzNkZWVhNzIwYjQ0NmRmMjFhYTJkMzQzODNkNjdmNTFiZTI5ZWNjN2M5ZGOp3HA=: 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:MTY5YmU0MTFiZmExM2EzNWE2YjUxMzNkZWVhNzIwYjQ0NmRmMjFhYTJkMzQzODNkNjdmNTFiZTI5ZWNjN2M5ZGOp3HA=: 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 4 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:20:58.551 01:08:10 
nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.551 01:08:10 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:58.809 nvme0n1 00:20:58.809 01:08:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.809 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.809 01:08:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.809 01:08:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:58.809 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:58.809 01:08:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.809 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.809 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.809 01:08:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.809 01:08:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:58.809 01:08:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.809 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:20:58.809 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:58.809 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:20:58.809 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.810 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:58.810 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:58.810 01:08:11 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=0 00:20:58.810 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTI0MTIzZTVhZjdmZjFkY2U0MjllZmFiOWQ5ZWI5ZWXDyf9a: 00:20:58.810 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: 00:20:58.810 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:58.810 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:58.810 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTI0MTIzZTVhZjdmZjFkY2U0MjllZmFiOWQ5ZWI5ZWXDyf9a: 00:20:58.810 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: ]] 00:20:58.810 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: 00:20:58.810 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 0 00:20:58.810 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:58.810 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:58.810 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:20:58.810 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:20:58.810 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.810 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:58.810 01:08:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.810 01:08:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:58.810 01:08:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.810 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:58.810 01:08:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:58.810 01:08:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:58.810 01:08:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:58.810 01:08:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.810 01:08:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.810 01:08:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:58.810 01:08:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.810 01:08:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:58.810 01:08:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:58.810 01:08:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:58.810 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.810 01:08:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.810 01:08:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:59.374 nvme0n1 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:YTNjZWNmMTY4MmM5MmRhMWJmNGM0NjhjOGQyODBkMjcxYjBlOTU1NTA2MTkxYzdiW44MJQ==: 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:YTNjZWNmMTY4MmM5MmRhMWJmNGM0NjhjOGQyODBkMjcxYjBlOTU1NTA2MTkxYzdiW44MJQ==: 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: ]] 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 1 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:59.374 
01:08:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.374 01:08:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:59.375 01:08:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:59.375 01:08:11 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:59.375 01:08:11 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.375 01:08:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.375 01:08:11 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:59.939 nvme0n1 00:20:59.939 01:08:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.939 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.939 01:08:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.939 01:08:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:59.939 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:59.939 01:08:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.939 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.939 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.939 01:08:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.939 01:08:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:59.939 01:08:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.939 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:59.939 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:20:59.939 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.939 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:59.939 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:59.939 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:59.939 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:YjEyMjg2MDg0YWI3Y2MzZGExZDAwOTk2ZjY4ZDNjMDAinTfj: 00:20:59.939 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: 00:20:59.939 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:59.939 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:59.939 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:YjEyMjg2MDg0YWI3Y2MzZGExZDAwOTk2ZjY4ZDNjMDAinTfj: 00:20:59.939 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: ]] 00:20:59.939 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: 00:20:59.939 
01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 2 00:20:59.940 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:59.940 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:59.940 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:20:59.940 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:20:59.940 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.940 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:59.940 01:08:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.940 01:08:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:59.940 01:08:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.940 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:59.940 01:08:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:59.940 01:08:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:59.940 01:08:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:59.940 01:08:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.940 01:08:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.940 01:08:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:59.940 01:08:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.940 01:08:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:59.940 01:08:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:59.940 01:08:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:59.940 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.940 01:08:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.940 01:08:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:00.505 nvme0n1 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- 
host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MWU5NmRlYTU4Y2UwM2RkYTg1ZGY2M2I2NjViNGY3YzI2YWUyZDJkZjhiZTE0NDM556km/g==: 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MWU5NmRlYTU4Y2UwM2RkYTg1ZGY2M2I2NjViNGY3YzI2YWUyZDJkZjhiZTE0NDM556km/g==: 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: ]] 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 3 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:00.505 01:08:12 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:01.070 nvme0n1 00:21:01.070 01:08:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.070 01:08:13 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.070 01:08:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.070 01:08:13 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:01.070 01:08:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:01.070 01:08:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.070 01:08:13 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.070 01:08:13 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.070 01:08:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.070 01:08:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:01.070 01:08:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.070 01:08:13 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:01.070 01:08:13 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:21:01.070 01:08:13 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:01.070 01:08:13 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:21:01.070 01:08:13 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:01.070 01:08:13 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:21:01.070 01:08:13 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:MTY5YmU0MTFiZmExM2EzNWE2YjUxMzNkZWVhNzIwYjQ0NmRmMjFhYTJkMzQzODNkNjdmNTFiZTI5ZWNjN2M5ZGOp3HA=: 00:21:01.070 01:08:13 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:21:01.070 01:08:13 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:01.070 01:08:13 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:21:01.070 01:08:13 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:MTY5YmU0MTFiZmExM2EzNWE2YjUxMzNkZWVhNzIwYjQ0NmRmMjFhYTJkMzQzODNkNjdmNTFiZTI5ZWNjN2M5ZGOp3HA=: 00:21:01.070 01:08:13 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:01.070 01:08:13 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 4 00:21:01.070 01:08:13 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:01.070 01:08:13 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:21:01.070 01:08:13 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:21:01.070 01:08:13 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:21:01.070 01:08:13 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.070 01:08:13 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:01.070 01:08:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.070 01:08:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:01.070 01:08:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.327 01:08:13 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:01.327 01:08:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:01.327 01:08:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 
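One small shell detail that recurs throughout this trace: the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) lines rely on bash's ${parameter:+word} expansion inside an array assignment, so the --dhchap-ctrlr-key flag is emitted only for key indexes that actually have a controller key; index 4's ckey is empty in this run, which is why its attach calls carry no controller key. A standalone illustration of the idiom follows, using placeholder key material rather than the DHHC-1 secrets from this log.

#!/usr/bin/env bash
# Illustration of the ${var:+...} idiom seen in host/auth.sh; the key string is
# a placeholder, not one of the secrets used in this run.
declare -a ckeys
ckeys[0]="DHHC-1:00:placeholder-controller-key:"   # index with a controller key
ckeys[4]=""                                        # index without one (as for keyid 4 here)
for keyid in 0 4; do
    # Expands to two extra arguments only when ckeys[keyid] is non-empty;
    # otherwise the array stays empty and nothing is appended to the command.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> extra args: ${ckey[*]:-(none)}"
done

In the bdev_nvme_attach_controller calls in this trace, that expansion is what lets the keyid 4 iterations run with --dhchap-key key4 alone while keyids 0 through 3 also pass their ckeyN.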
00:21:01.327 01:08:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:01.327 01:08:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.327 01:08:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.327 01:08:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:01.327 01:08:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.327 01:08:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:01.327 01:08:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:01.327 01:08:13 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:01.327 01:08:13 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:01.327 01:08:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.327 01:08:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:01.584 nvme0n1 00:21:01.584 01:08:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.584 01:08:13 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.584 01:08:13 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:01.584 01:08:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.584 01:08:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:01.890 01:08:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:ZTI0MTIzZTVhZjdmZjFkY2U0MjllZmFiOWQ5ZWI5ZWXDyf9a: 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:ZTI0MTIzZTVhZjdmZjFkY2U0MjllZmFiOWQ5ZWI5ZWXDyf9a: 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: ]] 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:MWQ3MWVkZDEwNjhhYjllOGIwNTQ0Y2I4NzVmOWFlYTBjMTE1MDRiNDFkMWI1MzJkMjFkNGEyZGM1Nzg4YzQyMK1WwoQ=: 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 0 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.890 01:08:14 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:02.823 nvme0n1 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.823 01:08:15 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:YTNjZWNmMTY4MmM5MmRhMWJmNGM0NjhjOGQyODBkMjcxYjBlOTU1NTA2MTkxYzdiW44MJQ==: 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:YTNjZWNmMTY4MmM5MmRhMWJmNGM0NjhjOGQyODBkMjcxYjBlOTU1NTA2MTkxYzdiW44MJQ==: 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: ]] 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 1 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.823 01:08:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:03.756 nvme0n1 00:21:03.756 01:08:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.756 01:08:15 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:03.756 01:08:15 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:03.756 01:08:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.756 01:08:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:03.756 01:08:15 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:YjEyMjg2MDg0YWI3Y2MzZGExZDAwOTk2ZjY4ZDNjMDAinTfj: 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:YjEyMjg2MDg0YWI3Y2MzZGExZDAwOTk2ZjY4ZDNjMDAinTfj: 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: ]] 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:NjA0M2VjODhlNjE5NjM0NjZjMDczZGM0ZDBiOTBhZDSbKGyo: 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 2 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups 
ffdhe8192 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.756 01:08:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:04.690 nvme0n1 00:21:04.690 01:08:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.690 01:08:16 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.690 01:08:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.690 01:08:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:04.690 01:08:16 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:04.690 01:08:16 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.690 01:08:17 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.690 01:08:17 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:04.690 01:08:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.690 01:08:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:04.690 01:08:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.690 01:08:17 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:04.690 01:08:17 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:21:04.690 01:08:17 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.690 01:08:17 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:21:04.690 01:08:17 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:04.690 01:08:17 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:21:04.690 01:08:17 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MWU5NmRlYTU4Y2UwM2RkYTg1ZGY2M2I2NjViNGY3YzI2YWUyZDJkZjhiZTE0NDM556km/g==: 00:21:04.690 01:08:17 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: 00:21:04.690 
01:08:17 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:04.691 01:08:17 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:21:04.691 01:08:17 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MWU5NmRlYTU4Y2UwM2RkYTg1ZGY2M2I2NjViNGY3YzI2YWUyZDJkZjhiZTE0NDM556km/g==: 00:21:04.691 01:08:17 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: ]] 00:21:04.691 01:08:17 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:YTkwMmRhZjQ4MzFlYTU4NGY2Y2YwMjRjNTUxNTM5YmH2oWZk: 00:21:04.691 01:08:17 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 3 00:21:04.691 01:08:17 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:04.691 01:08:17 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:21:04.691 01:08:17 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:21:04.691 01:08:17 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:21:04.691 01:08:17 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:04.691 01:08:17 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:04.691 01:08:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.691 01:08:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:04.691 01:08:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.691 01:08:17 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:04.691 01:08:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:04.691 01:08:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:04.691 01:08:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:04.691 01:08:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.691 01:08:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.691 01:08:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:04.691 01:08:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:04.691 01:08:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:04.691 01:08:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:04.691 01:08:17 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:04.691 01:08:17 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:04.691 01:08:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.691 01:08:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:05.625 nvme0n1 00:21:05.625 01:08:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.625 01:08:17 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.625 01:08:17 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:05.625 01:08:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.625 01:08:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:05.625 01:08:17 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.625 01:08:18 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.625 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.625 01:08:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.625 01:08:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:05.625 01:08:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.625 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:21:05.625 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:21:05.625 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.625 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:21:05.625 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:05.625 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:21:05.625 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:MTY5YmU0MTFiZmExM2EzNWE2YjUxMzNkZWVhNzIwYjQ0NmRmMjFhYTJkMzQzODNkNjdmNTFiZTI5ZWNjN2M5ZGOp3HA=: 00:21:05.625 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:21:05.625 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:05.625 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:21:05.625 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:MTY5YmU0MTFiZmExM2EzNWE2YjUxMzNkZWVhNzIwYjQ0NmRmMjFhYTJkMzQzODNkNjdmNTFiZTI5ZWNjN2M5ZGOp3HA=: 00:21:05.884 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:05.884 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 4 00:21:05.884 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:21:05.884 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:21:05.884 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:21:05.884 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:21:05.884 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.884 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:05.884 01:08:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.884 01:08:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:05.884 01:08:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.884 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:21:05.884 01:08:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:05.884 01:08:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:05.884 01:08:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:05.884 01:08:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.884 01:08:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.884 01:08:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:05.884 01:08:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.884 01:08:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:05.884 01:08:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:05.884 01:08:18 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:05.884 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:05.884 01:08:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.884 01:08:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:06.820 nvme0n1 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@123 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:YTNjZWNmMTY4MmM5MmRhMWJmNGM0NjhjOGQyODBkMjcxYjBlOTU1NTA2MTkxYzdiW44MJQ==: 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:YTNjZWNmMTY4MmM5MmRhMWJmNGM0NjhjOGQyODBkMjcxYjBlOTU1NTA2MTkxYzdiW44MJQ==: 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: ]] 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:YzI2NmQ0YjgyMzQ3YzkyZTdmOTUwMDYxMTE1ZDkxOGRiODUxNWQwOGJhNTFhYjFlC6UvIg==: 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@124 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@125 -- # get_main_ns_ip 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:06.820 
01:08:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- host/auth.sh@125 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:06.820 01:08:18 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:06.820 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:06.820 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.820 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:06.820 request: 00:21:06.820 { 00:21:06.820 "name": "nvme0", 00:21:06.820 "trtype": "tcp", 00:21:06.820 "traddr": "10.0.0.1", 00:21:06.820 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:06.820 "adrfam": "ipv4", 00:21:06.820 "trsvcid": "4420", 00:21:06.820 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:06.820 "method": "bdev_nvme_attach_controller", 00:21:06.820 "req_id": 1 00:21:06.820 } 00:21:06.820 Got JSON-RPC error response 00:21:06.820 response: 00:21:06.820 { 00:21:06.820 "code": -32602, 00:21:06.820 "message": "Invalid parameters" 00:21:06.820 } 00:21:06.820 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:06.820 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:21:06.820 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:06.820 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:06.820 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:06.820 01:08:19 nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # jq length 00:21:06.820 01:08:19 nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.820 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.820 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:06.820 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.820 01:08:19 
nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # (( 0 == 0 )) 00:21:06.820 01:08:19 nvmf_tcp.nvmf_auth -- host/auth.sh@130 -- # get_main_ns_ip 00:21:06.820 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:06.820 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:06.820 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:06.820 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.820 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.820 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:06.820 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.820 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:06.820 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:06.820 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:06.820 01:08:19 nvmf_tcp.nvmf_auth -- host/auth.sh@130 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:06.820 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:21:06.820 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:06.820 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:06.820 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:06.820 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:06.820 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:06.820 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:06.820 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.820 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:06.820 request: 00:21:06.820 { 00:21:06.820 "name": "nvme0", 00:21:06.820 "trtype": "tcp", 00:21:06.820 "traddr": "10.0.0.1", 00:21:06.820 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:06.820 "adrfam": "ipv4", 00:21:06.820 "trsvcid": "4420", 00:21:06.820 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:06.820 "dhchap_key": "key2", 00:21:06.820 "method": "bdev_nvme_attach_controller", 00:21:06.820 "req_id": 1 00:21:06.820 } 00:21:06.820 Got JSON-RPC error response 00:21:06.821 response: 00:21:06.821 { 00:21:06.821 "code": -32602, 00:21:06.821 "message": "Invalid parameters" 00:21:06.821 } 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # 
rpc_cmd bdev_nvme_get_controllers 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # jq length 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # (( 0 == 0 )) 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- host/auth.sh@136 -- # get_main_ns_ip 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:06.821 request: 00:21:06.821 { 00:21:06.821 "name": "nvme0", 00:21:06.821 "trtype": "tcp", 00:21:06.821 "traddr": "10.0.0.1", 00:21:06.821 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:06.821 "adrfam": "ipv4", 00:21:06.821 "trsvcid": "4420", 00:21:06.821 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:06.821 "dhchap_key": "key1", 00:21:06.821 "dhchap_ctrlr_key": "ckey2", 00:21:06.821 "method": "bdev_nvme_attach_controller", 00:21:06.821 "req_id": 1 00:21:06.821 } 00:21:06.821 Got JSON-RPC error response 00:21:06.821 response: 00:21:06.821 { 00:21:06.821 "code": -32602, 00:21:06.821 "message": "Invalid parameters" 00:21:06.821 } 
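The three rejected attach attempts above exercise the DH-HMAC-CHAP failure paths: the kernel target is programmed with key 1 (hmac(sha256), ffdhe2048) while the host offers no key, the wrong key, and the wrong controller key in turn, and every attempt must fail with JSON-RPC error -32602 and leave no controller behind. A condensed sketch of those steps, using the rpc_cmd and NOT helpers from common/autotest_common.sh that appear in the trace (addresses, NQNs and key ids are the ones from this run):

nvmet_auth_set_key sha256 ffdhe2048 1
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# No key offered at all: the connect must be rejected by the target.
NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0

# Wrong host key (key2 while the target expects key1): rejected as well.
NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2

# Correct host key but wrong controller (bidirectional) key: rejected.
NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey2

# No failed attempt may leave a controller behind.
(( $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ))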
00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- host/auth.sh@140 -- # trap - SIGINT SIGTERM EXIT 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- host/auth.sh@141 -- # cleanup 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- host/auth.sh@24 -- # nvmftestfini 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@117 -- # sync 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@120 -- # set +e 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:06.821 rmmod nvme_tcp 00:21:06.821 rmmod nvme_fabrics 00:21:06.821 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:07.079 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@124 -- # set -e 00:21:07.079 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@125 -- # return 0 00:21:07.079 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@489 -- # '[' -n 1326274 ']' 00:21:07.079 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@490 -- # killprocess 1326274 00:21:07.079 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@946 -- # '[' -z 1326274 ']' 00:21:07.079 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@950 -- # kill -0 1326274 00:21:07.079 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@951 -- # uname 00:21:07.079 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:07.079 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1326274 00:21:07.079 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:07.079 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:07.079 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1326274' 00:21:07.079 killing process with pid 1326274 00:21:07.079 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@965 -- # kill 1326274 00:21:07.079 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@970 -- # wait 1326274 00:21:07.338 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:07.338 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:07.338 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:07.338 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:07.338 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:07.338 01:08:19 nvmf_tcp.nvmf_auth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.338 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:07.338 01:08:19 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.269 01:08:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@279 
-- # ip -4 addr flush cvl_0_1 00:21:09.269 01:08:21 nvmf_tcp.nvmf_auth -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:21:09.269 01:08:21 nvmf_tcp.nvmf_auth -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:09.269 01:08:21 nvmf_tcp.nvmf_auth -- host/auth.sh@27 -- # clean_kernel_target 00:21:09.269 01:08:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:21:09.269 01:08:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@686 -- # echo 0 00:21:09.269 01:08:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:09.269 01:08:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:09.269 01:08:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:09.269 01:08:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:09.269 01:08:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:21:09.269 01:08:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:21:09.269 01:08:21 nvmf_tcp.nvmf_auth -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:21:10.642 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:21:10.642 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:21:10.642 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:21:10.642 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:21:10.642 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:21:10.642 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:21:10.642 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:21:10.642 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:21:10.642 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:21:10.642 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:21:10.642 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:21:10.642 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:21:10.642 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:21:10.642 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:21:10.642 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:21:10.642 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:21:12.017 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:21:12.017 01:08:24 nvmf_tcp.nvmf_auth -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.9Jb /tmp/spdk.key-null.fop /tmp/spdk.key-sha256.Bn1 /tmp/spdk.key-sha384.SYe /tmp/spdk.key-sha512.qzJ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:21:12.017 01:08:24 nvmf_tcp.nvmf_auth -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:21:13.393 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:21:13.393 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:21:13.393 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:21:13.393 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:21:13.393 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:21:13.393 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:21:13.393 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:21:13.393 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:21:13.393 0000:00:04.0 (8086 0e20): Already using the vfio-pci 
driver 00:21:13.393 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:21:13.393 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:21:13.393 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:21:13.393 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:21:13.393 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:21:13.393 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:21:13.393 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:21:13.393 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:21:13.393 00:21:13.393 real 0m48.198s 00:21:13.393 user 0m45.580s 00:21:13.393 sys 0m6.411s 00:21:13.393 01:08:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:13.393 01:08:25 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:21:13.393 ************************************ 00:21:13.393 END TEST nvmf_auth 00:21:13.393 ************************************ 00:21:13.394 01:08:25 nvmf_tcp -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:21:13.394 01:08:25 nvmf_tcp -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:13.394 01:08:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:13.394 01:08:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:13.394 01:08:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:13.394 ************************************ 00:21:13.394 START TEST nvmf_digest 00:21:13.394 ************************************ 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:13.394 * Looking for test storage... 
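The cleanup traced above unloads nvme-tcp/nvme-fabrics on the initiator, kills the SPDK target (pid 1326274) and then dismantles the kernel nvmet target through configfs before nvmf_auth is declared passed. configfs only lets you remove objects that are no longer referenced, so symlinks are dropped first, then namespaces, ports and the subsystem, and finally the modules. A rough sketch of that order, reconstructed from the trace; the destination of the bare 'echo 0' is not visible in the log, so writing to the namespace enable attribute is an assumption:

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
# Unlink the allowed host and remove the host definition.
rm "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"
rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
# Disable the namespace (assumed target of the 'echo 0' in the trace),
# unlink the subsystem from the port, then remove the empty directories.
echo 0 > "$subsys/namespaces/1/enable"
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
rmdir "$subsys/namespaces/1"
rmdir /sys/kernel/config/nvmet/ports/1
rmdir "$subsys"
# With configfs empty the kernel target modules can be unloaded.
modprobe -r nvmet_tcp nvmet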
00:21:13.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:13.394 01:08:25 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:21:13.394 01:08:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:15.927 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:15.927 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:15.927 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:15.928 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:15.928 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:15.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:15.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:21:15.928 00:21:15.928 --- 10.0.0.2 ping statistics --- 00:21:15.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.928 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:15.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:15.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:21:15.928 00:21:15.928 --- 10.0.0.1 ping statistics --- 00:21:15.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.928 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:15.928 ************************************ 00:21:15.928 START TEST nvmf_digest_clean 00:21:15.928 ************************************ 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1336057 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1336057 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1336057 ']' 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.928 
01:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:15.928 01:08:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:16.187 [2024-05-15 01:08:28.352748] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:21:16.187 [2024-05-15 01:08:28.352833] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.187 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.187 [2024-05-15 01:08:28.434589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.187 [2024-05-15 01:08:28.555204] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:16.187 [2024-05-15 01:08:28.555264] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:16.187 [2024-05-15 01:08:28.555281] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.187 [2024-05-15 01:08:28.555295] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.187 [2024-05-15 01:08:28.555306] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
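Because NET_TYPE=phy, digest.sh drives two ports of the same physical e810 NIC: cvl_0_0 is moved into a private network namespace for the target and cvl_0_1 stays in the root namespace as the initiator, which is why the target later listens on 10.0.0.2 while the host connects from 10.0.0.1. A condensed sketch of the bring-up and target launch traced above (interface names, addresses and paths are taken from this run):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# nvmf_tgt runs inside the namespace and is polled until its RPC socket answers.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
waitforlisten "$!"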
00:21:16.187 [2024-05-15 01:08:28.555335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.120 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:17.120 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:21:17.120 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:17.120 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:17.120 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:17.120 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:17.120 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:21:17.120 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:21:17.120 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:21:17.120 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.120 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:17.120 null0 00:21:17.120 [2024-05-15 01:08:29.429784] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:17.120 [2024-05-15 01:08:29.453759] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:17.120 [2024-05-15 01:08:29.454034] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:17.120 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.120 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:21:17.120 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:17.120 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:17.120 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:21:17.120 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:17.120 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:17.120 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:17.120 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1336209 00:21:17.120 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:17.121 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1336209 /var/tmp/bperf.sock 00:21:17.121 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1336209 ']' 00:21:17.121 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:17.121 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:21:17.121 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:17.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:17.121 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:17.121 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:17.121 [2024-05-15 01:08:29.499278] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:21:17.121 [2024-05-15 01:08:29.499370] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1336209 ] 00:21:17.380 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.380 [2024-05-15 01:08:29.569196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.380 [2024-05-15 01:08:29.682079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:17.380 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:17.380 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:21:17.380 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:17.380 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:17.380 01:08:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:17.951 01:08:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:17.951 01:08:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:18.247 nvme0n1 00:21:18.247 01:08:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:18.247 01:08:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:18.247 Running I/O for 2 seconds... 
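run_bperf randread 4096 128 false, traced above, is the software-only clean-digest case: bdevperf is started with --wait-for-rpc, attached to the target with --ddgst so every NVMe/TCP data PDU carries a CRC32C data digest, and then driven for 2 seconds of 4 KiB random reads over its RPC socket. Roughly, the steps look like this (paths and flags copied from the trace):

bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
waitforlisten "$!" /var/tmp/bperf.sock

$rpc -s /var/tmp/bperf.sock framework_start_init
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Run the timed workload through the bdevperf RPC helper.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests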
00:21:20.779 00:21:20.779 Latency(us) 00:21:20.779 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.779 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:20.779 nvme0n1 : 2.00 18675.34 72.95 0.00 0.00 6844.71 3592.34 19320.98 00:21:20.779 =================================================================================================================== 00:21:20.779 Total : 18675.34 72.95 0.00 0.00 6844.71 3592.34 19320.98 00:21:20.779 0 00:21:20.779 01:08:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:20.779 01:08:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:20.779 01:08:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:20.779 01:08:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:20.779 01:08:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:20.779 | select(.opcode=="crc32c") 00:21:20.779 | "\(.module_name) \(.executed)"' 00:21:20.779 01:08:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:20.779 01:08:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:20.779 01:08:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:20.779 01:08:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:20.779 01:08:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1336209 00:21:20.779 01:08:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1336209 ']' 00:21:20.779 01:08:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1336209 00:21:20.779 01:08:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:21:20.779 01:08:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:20.779 01:08:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1336209 00:21:20.779 01:08:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:20.779 01:08:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:20.780 01:08:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1336209' 00:21:20.780 killing process with pid 1336209 00:21:20.780 01:08:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1336209 00:21:20.780 Received shutdown signal, test time was about 2.000000 seconds 00:21:20.780 00:21:20.780 Latency(us) 00:21:20.780 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.780 =================================================================================================================== 00:21:20.780 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:20.780 01:08:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1336209 00:21:21.038 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:21:21.038 01:08:33 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:21.038 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:21.038 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:21:21.038 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:21.038 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:21.038 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:21.038 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1336739 00:21:21.038 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1336739 /var/tmp/bperf.sock 00:21:21.038 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:21.038 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1336739 ']' 00:21:21.038 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:21.038 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:21.038 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:21.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:21.038 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:21.038 01:08:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:21.038 [2024-05-15 01:08:33.219681] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:21:21.038 [2024-05-15 01:08:33.219771] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1336739 ] 00:21:21.038 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:21.038 Zero copy mechanism will not be used. 
00:21:21.038 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.038 [2024-05-15 01:08:33.292772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.038 [2024-05-15 01:08:33.410375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.979 01:08:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:21.979 01:08:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:21:21.979 01:08:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:21.979 01:08:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:21.979 01:08:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:22.237 01:08:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:22.237 01:08:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:22.802 nvme0n1 00:21:22.802 01:08:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:22.802 01:08:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:22.802 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:22.802 Zero copy mechanism will not be used. 00:21:22.802 Running I/O for 2 seconds... 
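Each timed run is judged not by the IOPS table but by the accel statistics read right afterwards (the digest.sh@93-96 records above and after each later run): the jq filter pulls the module name and execution count for the crc32c opcode, and the run passes only if the count is non-zero and the module matches the expected one, which is "software" here because scan_dsa=false. A stand-alone sketch of that check, assuming the same paths and socket as in the log:

    # Post-run digest check: crc32c must actually have executed, in the software module
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    read -r acc_module acc_executed < <(
        $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')

    (( acc_executed > 0 ))           # the digest path really went through crc32c
    [[ $acc_module == software ]]    # and used the expected accel module (scan_dsa=false)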
00:21:25.358 00:21:25.358 Latency(us) 00:21:25.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.358 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:25.358 nvme0n1 : 2.01 2089.87 261.23 0.00 0.00 7650.86 7330.32 15437.37 00:21:25.358 =================================================================================================================== 00:21:25.358 Total : 2089.87 261.23 0.00 0.00 7650.86 7330.32 15437.37 00:21:25.358 0 00:21:25.358 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:25.358 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:25.358 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:25.358 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:25.358 | select(.opcode=="crc32c") 00:21:25.358 | "\(.module_name) \(.executed)"' 00:21:25.358 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:25.358 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:25.358 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:25.358 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:25.358 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:25.358 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1336739 00:21:25.358 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1336739 ']' 00:21:25.358 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1336739 00:21:25.358 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:21:25.358 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:25.358 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1336739 00:21:25.358 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:25.358 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:25.358 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1336739' 00:21:25.358 killing process with pid 1336739 00:21:25.358 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1336739 00:21:25.358 Received shutdown signal, test time was about 2.000000 seconds 00:21:25.358 00:21:25.358 Latency(us) 00:21:25.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.358 =================================================================================================================== 00:21:25.358 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:25.358 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1336739 00:21:25.358 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:21:25.358 01:08:37 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:25.358 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:25.358 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:25.358 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:25.358 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:25.358 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:25.358 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1337282 00:21:25.358 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1337282 /var/tmp/bperf.sock 00:21:25.358 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:25.358 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1337282 ']' 00:21:25.358 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:25.358 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:25.358 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:25.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:25.358 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:25.358 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:25.617 [2024-05-15 01:08:37.752504] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
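Every attach in this clean phase enables only the NVMe/TCP data digest (--ddgst), which is what forces a crc32c over each payload and so exercises the accel framework. For comparison, a variant that also covers the PDU header would add rpc.py's --hdgst flag; that flag is an assumption here and is never used in this part of the log:

    # Hypothetical variant: enable both header and data digests on the TCP connection
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller --hdgst --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0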
00:21:25.617 [2024-05-15 01:08:37.752596] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1337282 ] 00:21:25.617 EAL: No free 2048 kB hugepages reported on node 1 00:21:25.617 [2024-05-15 01:08:37.821488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.617 [2024-05-15 01:08:37.929201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:25.617 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:25.617 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:21:25.617 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:25.617 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:25.617 01:08:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:26.182 01:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:26.182 01:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:26.440 nvme0n1 00:21:26.440 01:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:26.440 01:08:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:26.440 Running I/O for 2 seconds... 
00:21:28.967 00:21:28.967 Latency(us) 00:21:28.967 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.967 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:28.967 nvme0n1 : 2.00 15286.42 59.71 0.00 0.00 8362.57 7378.87 23787.14 00:21:28.967 =================================================================================================================== 00:21:28.967 Total : 15286.42 59.71 0.00 0.00 8362.57 7378.87 23787.14 00:21:28.967 0 00:21:28.967 01:08:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:28.967 01:08:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:28.967 01:08:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:28.967 01:08:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:28.967 | select(.opcode=="crc32c") 00:21:28.967 | "\(.module_name) \(.executed)"' 00:21:28.967 01:08:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:28.967 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:28.967 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:28.967 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:28.967 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:28.967 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1337282 00:21:28.967 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1337282 ']' 00:21:28.967 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1337282 00:21:28.967 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:21:28.967 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:28.967 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1337282 00:21:28.967 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:28.967 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:28.967 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1337282' 00:21:28.967 killing process with pid 1337282 00:21:28.967 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1337282 00:21:28.967 Received shutdown signal, test time was about 2.000000 seconds 00:21:28.967 00:21:28.967 Latency(us) 00:21:28.967 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.967 =================================================================================================================== 00:21:28.967 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:28.967 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1337282 00:21:29.226 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:21:29.226 01:08:41 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:29.226 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:29.226 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:29.226 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:29.226 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:29.226 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:29.226 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1337687 00:21:29.226 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1337687 /var/tmp/bperf.sock 00:21:29.226 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:29.226 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1337687 ']' 00:21:29.226 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:29.226 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:29.226 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:29.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:29.226 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:29.226 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:29.226 [2024-05-15 01:08:41.410531] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:21:29.226 [2024-05-15 01:08:41.410622] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1337687 ] 00:21:29.226 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:29.226 Zero copy mechanism will not be used. 
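The repeated "Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock..." lines come from the waitforlisten helper, which simply polls the freshly started bdevperf's RPC socket until it answers. A rough equivalent of that loop (the real helper in autotest_common.sh also verifies that the pid is still alive; rpc.py's -t timeout option is assumed to behave as in current SPDK):

    # Simplified waitforlisten: poll the bperf RPC socket until bdevperf accepts commands
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    for _ in $(seq 1 100); do
        if $SPDK/scripts/rpc.py -t 1 -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done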
00:21:29.226 EAL: No free 2048 kB hugepages reported on node 1 00:21:29.226 [2024-05-15 01:08:41.479001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.227 [2024-05-15 01:08:41.587054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.485 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:29.485 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:21:29.485 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:29.485 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:29.485 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:29.743 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:29.743 01:08:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:30.000 nvme0n1 00:21:30.000 01:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:30.001 01:08:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:30.001 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:30.001 Zero copy mechanism will not be used. 00:21:30.001 Running I/O for 2 seconds... 
00:21:32.526 00:21:32.526 Latency(us) 00:21:32.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.526 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:32.526 nvme0n1 : 2.01 1473.73 184.22 0.00 0.00 10825.48 6407.96 17767.54 00:21:32.526 =================================================================================================================== 00:21:32.526 Total : 1473.73 184.22 0.00 0.00 10825.48 6407.96 17767.54 00:21:32.526 0 00:21:32.526 01:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:32.526 01:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:32.526 01:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:32.526 01:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:32.526 | select(.opcode=="crc32c") 00:21:32.526 | "\(.module_name) \(.executed)"' 00:21:32.526 01:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:32.526 01:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:32.526 01:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:32.526 01:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:32.526 01:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:32.526 01:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1337687 00:21:32.526 01:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1337687 ']' 00:21:32.526 01:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1337687 00:21:32.526 01:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:21:32.526 01:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:32.526 01:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1337687 00:21:32.526 01:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:32.526 01:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:32.526 01:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1337687' 00:21:32.526 killing process with pid 1337687 00:21:32.526 01:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1337687 00:21:32.526 Received shutdown signal, test time was about 2.000000 seconds 00:21:32.526 00:21:32.526 Latency(us) 00:21:32.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.526 =================================================================================================================== 00:21:32.526 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:32.526 01:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1337687 00:21:32.784 01:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1336057 00:21:32.784 01:08:44 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1336057 ']' 00:21:32.784 01:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1336057 00:21:32.784 01:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:21:32.784 01:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:32.784 01:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1336057 00:21:32.784 01:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:32.784 01:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:32.784 01:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1336057' 00:21:32.784 killing process with pid 1336057 00:21:32.784 01:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1336057 00:21:32.784 [2024-05-15 01:08:44.990629] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:32.784 01:08:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1336057 00:21:33.042 00:21:33.042 real 0m16.962s 00:21:33.042 user 0m32.802s 00:21:33.042 sys 0m4.237s 00:21:33.042 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:33.042 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:33.042 ************************************ 00:21:33.042 END TEST nvmf_digest_clean 00:21:33.042 ************************************ 00:21:33.042 01:08:45 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:21:33.042 01:08:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:21:33.042 01:08:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:33.042 01:08:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:33.042 ************************************ 00:21:33.042 START TEST nvmf_digest_error 00:21:33.042 ************************************ 00:21:33.042 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:21:33.042 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:21:33.042 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:33.042 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:33.042 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:33.042 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1338122 00:21:33.042 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:33.042 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1338122 00:21:33.042 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 
1338122 ']' 00:21:33.042 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.042 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:33.042 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.042 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:33.042 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:33.042 [2024-05-15 01:08:45.375274] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:21:33.042 [2024-05-15 01:08:45.375366] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.042 EAL: No free 2048 kB hugepages reported on node 1 00:21:33.301 [2024-05-15 01:08:45.449773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.301 [2024-05-15 01:08:45.560770] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:33.301 [2024-05-15 01:08:45.560821] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:33.301 [2024-05-15 01:08:45.560844] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:33.301 [2024-05-15 01:08:45.560855] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:33.301 [2024-05-15 01:08:45.560865] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
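The nvmf_digest_error phase that starts below reuses the same bdevperf flow but deliberately breaks the digest calculation on the target side: crc32c is first assigned to the accel error-injection module while the nvmf target is still paused by --wait-for-rpc, the bperf side sets --nvme-error-stat with an unlimited --bdev-retry-count and momentarily disables injection while attaching, and just before perform_tests the next 256 crc32c operations are told to produce corrupt results. That is why every read that follows completes with a data digest error and a COMMAND TRANSIENT TRANSPORT ERROR status. The two target-side RPCs, exactly as they appear in the records below (rpc_cmd presumably talks to the target's default /var/tmp/spdk.sock socket):

    # Route all crc32c work to the error-injection accel module (done before framework init)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py accel_assign_opc -o crc32c -m error

    # Later, corrupt the results of the next 256 crc32c operations so the host sees digest errors
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256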
00:21:33.301 [2024-05-15 01:08:45.560901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.301 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:33.301 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:21:33.301 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:33.301 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:33.301 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:33.301 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:33.301 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:21:33.301 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.301 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:33.301 [2024-05-15 01:08:45.613445] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:21:33.301 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.301 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:21:33.301 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:21:33.301 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.301 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:33.559 null0 00:21:33.559 [2024-05-15 01:08:45.729097] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:33.559 [2024-05-15 01:08:45.753087] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:33.559 [2024-05-15 01:08:45.753348] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:33.559 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.559 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:21:33.559 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:33.559 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:21:33.559 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:21:33.559 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:21:33.559 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1338264 00:21:33.559 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1338264 /var/tmp/bperf.sock 00:21:33.559 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1338264 ']' 00:21:33.559 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:33.559 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:21:33.559 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:33.559 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:33.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:33.559 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:33.559 01:08:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:33.559 [2024-05-15 01:08:45.798843] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:21:33.559 [2024-05-15 01:08:45.798941] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1338264 ] 00:21:33.559 EAL: No free 2048 kB hugepages reported on node 1 00:21:33.559 [2024-05-15 01:08:45.874795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.818 [2024-05-15 01:08:45.991182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:34.384 01:08:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:34.384 01:08:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:21:34.384 01:08:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:34.384 01:08:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:34.642 01:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:34.642 01:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.642 01:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:34.642 01:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.642 01:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:34.642 01:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:35.245 nvme0n1 00:21:35.245 01:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:35.245 01:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.245 01:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:35.245 01:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.245 01:08:47 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:35.245 01:08:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:35.245 Running I/O for 2 seconds... 00:21:35.245 [2024-05-15 01:08:47.465129] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.245 [2024-05-15 01:08:47.465181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.245 [2024-05-15 01:08:47.465201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.245 [2024-05-15 01:08:47.479407] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.245 [2024-05-15 01:08:47.479439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.245 [2024-05-15 01:08:47.479466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.245 [2024-05-15 01:08:47.492087] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.245 [2024-05-15 01:08:47.492118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.245 [2024-05-15 01:08:47.492135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.245 [2024-05-15 01:08:47.504645] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.245 [2024-05-15 01:08:47.504677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.245 [2024-05-15 01:08:47.504693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.245 [2024-05-15 01:08:47.518464] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.245 [2024-05-15 01:08:47.518494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.246 [2024-05-15 01:08:47.518510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.246 [2024-05-15 01:08:47.530087] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.246 [2024-05-15 01:08:47.530119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.246 [2024-05-15 01:08:47.530136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.246 [2024-05-15 01:08:47.544388] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f2a950) 00:21:35.246 [2024-05-15 01:08:47.544418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.246 [2024-05-15 01:08:47.544434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.246 [2024-05-15 01:08:47.556143] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.246 [2024-05-15 01:08:47.556174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.246 [2024-05-15 01:08:47.556191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.246 [2024-05-15 01:08:47.569805] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.246 [2024-05-15 01:08:47.569851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.246 [2024-05-15 01:08:47.569868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.246 [2024-05-15 01:08:47.582606] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.246 [2024-05-15 01:08:47.582636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.246 [2024-05-15 01:08:47.582652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.246 [2024-05-15 01:08:47.595827] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.246 [2024-05-15 01:08:47.595861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.246 [2024-05-15 01:08:47.595877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.246 [2024-05-15 01:08:47.607001] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.246 [2024-05-15 01:08:47.607031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.246 [2024-05-15 01:08:47.607048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.505 [2024-05-15 01:08:47.622322] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.505 [2024-05-15 01:08:47.622353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.505 [2024-05-15 01:08:47.622369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.505 [2024-05-15 01:08:47.634373] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.505 [2024-05-15 01:08:47.634403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.505 [2024-05-15 01:08:47.634419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.505 [2024-05-15 01:08:47.646558] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.505 [2024-05-15 01:08:47.646588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.505 [2024-05-15 01:08:47.646603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.505 [2024-05-15 01:08:47.660880] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.505 [2024-05-15 01:08:47.660926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.505 [2024-05-15 01:08:47.660952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.505 [2024-05-15 01:08:47.673709] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.505 [2024-05-15 01:08:47.673739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.505 [2024-05-15 01:08:47.673756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.505 [2024-05-15 01:08:47.686046] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.505 [2024-05-15 01:08:47.686079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.505 [2024-05-15 01:08:47.686096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.505 [2024-05-15 01:08:47.699874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.505 [2024-05-15 01:08:47.699920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.505 [2024-05-15 01:08:47.699946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.505 [2024-05-15 01:08:47.711950] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.505 [2024-05-15 01:08:47.711998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.505 [2024-05-15 01:08:47.712018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:21:35.505 [2024-05-15 01:08:47.725162] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.505 [2024-05-15 01:08:47.725219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.505 [2024-05-15 01:08:47.725237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.505 [2024-05-15 01:08:47.737717] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.505 [2024-05-15 01:08:47.737748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.505 [2024-05-15 01:08:47.737764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.505 [2024-05-15 01:08:47.751048] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.505 [2024-05-15 01:08:47.751079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.505 [2024-05-15 01:08:47.751096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.505 [2024-05-15 01:08:47.764032] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.505 [2024-05-15 01:08:47.764063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.505 [2024-05-15 01:08:47.764080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.505 [2024-05-15 01:08:47.777820] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.505 [2024-05-15 01:08:47.777853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.505 [2024-05-15 01:08:47.777870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.506 [2024-05-15 01:08:47.790841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.506 [2024-05-15 01:08:47.790875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.506 [2024-05-15 01:08:47.790893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.506 [2024-05-15 01:08:47.804173] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.506 [2024-05-15 01:08:47.804219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.506 [2024-05-15 01:08:47.804236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.506 [2024-05-15 01:08:47.817345] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.506 [2024-05-15 01:08:47.817395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.506 [2024-05-15 01:08:47.817422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.506 [2024-05-15 01:08:47.831381] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.506 [2024-05-15 01:08:47.831417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.506 [2024-05-15 01:08:47.831436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.506 [2024-05-15 01:08:47.845043] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.506 [2024-05-15 01:08:47.845075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.506 [2024-05-15 01:08:47.845091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.506 [2024-05-15 01:08:47.856940] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.506 [2024-05-15 01:08:47.856997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.506 [2024-05-15 01:08:47.857014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.506 [2024-05-15 01:08:47.871563] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.506 [2024-05-15 01:08:47.871598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.506 [2024-05-15 01:08:47.871618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.506 [2024-05-15 01:08:47.885744] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.506 [2024-05-15 01:08:47.885778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.506 [2024-05-15 01:08:47.885797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.766 [2024-05-15 01:08:47.899557] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.766 [2024-05-15 01:08:47.899591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.766 [2024-05-15 01:08:47.899610] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.766 [2024-05-15 01:08:47.913919] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.766 [2024-05-15 01:08:47.913978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.766 [2024-05-15 01:08:47.913997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.766 [2024-05-15 01:08:47.927795] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.766 [2024-05-15 01:08:47.927829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.766 [2024-05-15 01:08:47.927848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.766 [2024-05-15 01:08:47.941034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.766 [2024-05-15 01:08:47.941070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.766 [2024-05-15 01:08:47.941087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.766 [2024-05-15 01:08:47.956431] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.766 [2024-05-15 01:08:47.956466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.766 [2024-05-15 01:08:47.956485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.766 [2024-05-15 01:08:47.968581] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.766 [2024-05-15 01:08:47.968616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.766 [2024-05-15 01:08:47.968634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.766 [2024-05-15 01:08:47.983298] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.766 [2024-05-15 01:08:47.983332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.766 [2024-05-15 01:08:47.983351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.766 [2024-05-15 01:08:47.998134] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.766 [2024-05-15 01:08:47.998165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:35.766 [2024-05-15 01:08:47.998183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.766 [2024-05-15 01:08:48.011007] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.766 [2024-05-15 01:08:48.011039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.766 [2024-05-15 01:08:48.011056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.766 [2024-05-15 01:08:48.024513] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.766 [2024-05-15 01:08:48.024547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.766 [2024-05-15 01:08:48.024566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.766 [2024-05-15 01:08:48.040212] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.766 [2024-05-15 01:08:48.040243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.766 [2024-05-15 01:08:48.040277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.766 [2024-05-15 01:08:48.052832] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.766 [2024-05-15 01:08:48.052866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.766 [2024-05-15 01:08:48.052890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.766 [2024-05-15 01:08:48.066809] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.766 [2024-05-15 01:08:48.066843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.766 [2024-05-15 01:08:48.066862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.766 [2024-05-15 01:08:48.080580] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.766 [2024-05-15 01:08:48.080614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.766 [2024-05-15 01:08:48.080633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.766 [2024-05-15 01:08:48.095796] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.766 [2024-05-15 01:08:48.095830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 
lba:19394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.766 [2024-05-15 01:08:48.095849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.766 [2024-05-15 01:08:48.109370] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.766 [2024-05-15 01:08:48.109405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.766 [2024-05-15 01:08:48.109424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.766 [2024-05-15 01:08:48.122477] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.767 [2024-05-15 01:08:48.122512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.767 [2024-05-15 01:08:48.122530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.767 [2024-05-15 01:08:48.137219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.767 [2024-05-15 01:08:48.137254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.767 [2024-05-15 01:08:48.137273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.767 [2024-05-15 01:08:48.152241] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:35.767 [2024-05-15 01:08:48.152290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.767 [2024-05-15 01:08:48.152308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.027 [2024-05-15 01:08:48.165674] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.027 [2024-05-15 01:08:48.165709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.027 [2024-05-15 01:08:48.165728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.027 [2024-05-15 01:08:48.179851] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.027 [2024-05-15 01:08:48.179891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.027 [2024-05-15 01:08:48.179911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.027 [2024-05-15 01:08:48.192975] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.027 [2024-05-15 01:08:48.193006] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.027 [2024-05-15 01:08:48.193023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.027 [2024-05-15 01:08:48.207919] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.027 [2024-05-15 01:08:48.207977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.027 [2024-05-15 01:08:48.207996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.027 [2024-05-15 01:08:48.220898] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.027 [2024-05-15 01:08:48.220939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.027 [2024-05-15 01:08:48.220960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.027 [2024-05-15 01:08:48.235034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.027 [2024-05-15 01:08:48.235065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.027 [2024-05-15 01:08:48.235082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.027 [2024-05-15 01:08:48.248804] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.027 [2024-05-15 01:08:48.248838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.027 [2024-05-15 01:08:48.248856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.027 [2024-05-15 01:08:48.263908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.027 [2024-05-15 01:08:48.263949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.027 [2024-05-15 01:08:48.263984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.027 [2024-05-15 01:08:48.276380] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.027 [2024-05-15 01:08:48.276414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.027 [2024-05-15 01:08:48.276433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.027 [2024-05-15 01:08:48.290104] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 
00:21:36.027 [2024-05-15 01:08:48.290133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.027 [2024-05-15 01:08:48.290164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.027 [2024-05-15 01:08:48.304200] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.027 [2024-05-15 01:08:48.304248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.027 [2024-05-15 01:08:48.304265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.027 [2024-05-15 01:08:48.319051] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.027 [2024-05-15 01:08:48.319082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.027 [2024-05-15 01:08:48.319100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.027 [2024-05-15 01:08:48.332384] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.027 [2024-05-15 01:08:48.332430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.027 [2024-05-15 01:08:48.332448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.028 [2024-05-15 01:08:48.345167] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.028 [2024-05-15 01:08:48.345199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.028 [2024-05-15 01:08:48.345233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.028 [2024-05-15 01:08:48.360257] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.028 [2024-05-15 01:08:48.360304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.028 [2024-05-15 01:08:48.360323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.028 [2024-05-15 01:08:48.375567] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.028 [2024-05-15 01:08:48.375601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.028 [2024-05-15 01:08:48.375620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.028 [2024-05-15 01:08:48.388329] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.028 [2024-05-15 01:08:48.388364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.028 [2024-05-15 01:08:48.388384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.028 [2024-05-15 01:08:48.403223] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.028 [2024-05-15 01:08:48.403272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.028 [2024-05-15 01:08:48.403291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.028 [2024-05-15 01:08:48.417466] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.028 [2024-05-15 01:08:48.417500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.028 [2024-05-15 01:08:48.417525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.288 [2024-05-15 01:08:48.431346] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.288 [2024-05-15 01:08:48.431382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.288 [2024-05-15 01:08:48.431401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.288 [2024-05-15 01:08:48.444655] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.288 [2024-05-15 01:08:48.444689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.288 [2024-05-15 01:08:48.444709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.288 [2024-05-15 01:08:48.459780] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.288 [2024-05-15 01:08:48.459814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.288 [2024-05-15 01:08:48.459833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.288 [2024-05-15 01:08:48.472304] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.288 [2024-05-15 01:08:48.472338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.288 [2024-05-15 01:08:48.472357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.288 [2024-05-15 01:08:48.484982] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.288 [2024-05-15 01:08:48.485011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.288 [2024-05-15 01:08:48.485027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.288 [2024-05-15 01:08:48.498606] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.288 [2024-05-15 01:08:48.498640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.288 [2024-05-15 01:08:48.498659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.288 [2024-05-15 01:08:48.513794] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.288 [2024-05-15 01:08:48.513828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.288 [2024-05-15 01:08:48.513846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.288 [2024-05-15 01:08:48.527624] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.288 [2024-05-15 01:08:48.527658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.288 [2024-05-15 01:08:48.527677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.288 [2024-05-15 01:08:48.541883] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.288 [2024-05-15 01:08:48.541923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.288 [2024-05-15 01:08:48.541955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.288 [2024-05-15 01:08:48.555779] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.288 [2024-05-15 01:08:48.555813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.288 [2024-05-15 01:08:48.555832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.288 [2024-05-15 01:08:48.569917] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.288 [2024-05-15 01:08:48.569973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.288 [2024-05-15 01:08:48.569992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:21:36.288 [2024-05-15 01:08:48.583364] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.288 [2024-05-15 01:08:48.583398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.288 [2024-05-15 01:08:48.583417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.288 [2024-05-15 01:08:48.598171] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.288 [2024-05-15 01:08:48.598201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.288 [2024-05-15 01:08:48.598233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.288 [2024-05-15 01:08:48.610434] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.288 [2024-05-15 01:08:48.610469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.288 [2024-05-15 01:08:48.610487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.288 [2024-05-15 01:08:48.625282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.289 [2024-05-15 01:08:48.625316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.289 [2024-05-15 01:08:48.625334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.289 [2024-05-15 01:08:48.638465] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.289 [2024-05-15 01:08:48.638499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.289 [2024-05-15 01:08:48.638517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.289 [2024-05-15 01:08:48.653126] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.289 [2024-05-15 01:08:48.653156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.289 [2024-05-15 01:08:48.653172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.289 [2024-05-15 01:08:48.667786] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.289 [2024-05-15 01:08:48.667820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.289 [2024-05-15 01:08:48.667839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.289 [2024-05-15 01:08:48.679827] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.289 [2024-05-15 01:08:48.679862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.289 [2024-05-15 01:08:48.679880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.548 [2024-05-15 01:08:48.695456] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.548 [2024-05-15 01:08:48.695492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.548 [2024-05-15 01:08:48.695511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.548 [2024-05-15 01:08:48.708644] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.548 [2024-05-15 01:08:48.708678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.548 [2024-05-15 01:08:48.708697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.548 [2024-05-15 01:08:48.723838] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.548 [2024-05-15 01:08:48.723872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.548 [2024-05-15 01:08:48.723891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.548 [2024-05-15 01:08:48.735906] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.548 [2024-05-15 01:08:48.735950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.548 [2024-05-15 01:08:48.735971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.548 [2024-05-15 01:08:48.749488] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.548 [2024-05-15 01:08:48.749520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.548 [2024-05-15 01:08:48.749535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.548 [2024-05-15 01:08:48.762870] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.548 [2024-05-15 01:08:48.762900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.548 [2024-05-15 01:08:48.762916] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.548 [2024-05-15 01:08:48.776597] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.548 [2024-05-15 01:08:48.776632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.548 [2024-05-15 01:08:48.776648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.548 [2024-05-15 01:08:48.788832] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.548 [2024-05-15 01:08:48.788863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.548 [2024-05-15 01:08:48.788879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.548 [2024-05-15 01:08:48.801954] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.548 [2024-05-15 01:08:48.801988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.548 [2024-05-15 01:08:48.802004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.548 [2024-05-15 01:08:48.815398] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.548 [2024-05-15 01:08:48.815441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.548 [2024-05-15 01:08:48.815457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.548 [2024-05-15 01:08:48.828860] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.548 [2024-05-15 01:08:48.828889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.548 [2024-05-15 01:08:48.828919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.548 [2024-05-15 01:08:48.841575] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.548 [2024-05-15 01:08:48.841621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.548 [2024-05-15 01:08:48.841638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.548 [2024-05-15 01:08:48.854310] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.548 [2024-05-15 01:08:48.854341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:36.548 [2024-05-15 01:08:48.854358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.548 [2024-05-15 01:08:48.868271] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.548 [2024-05-15 01:08:48.868317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.548 [2024-05-15 01:08:48.868334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.548 [2024-05-15 01:08:48.881004] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.548 [2024-05-15 01:08:48.881034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.548 [2024-05-15 01:08:48.881051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.548 [2024-05-15 01:08:48.893549] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.548 [2024-05-15 01:08:48.893588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.548 [2024-05-15 01:08:48.893605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.548 [2024-05-15 01:08:48.906376] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.548 [2024-05-15 01:08:48.906418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.548 [2024-05-15 01:08:48.906434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.548 [2024-05-15 01:08:48.918755] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.548 [2024-05-15 01:08:48.918787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.548 [2024-05-15 01:08:48.918803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.548 [2024-05-15 01:08:48.932686] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.548 [2024-05-15 01:08:48.932715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.548 [2024-05-15 01:08:48.932746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.809 [2024-05-15 01:08:48.946601] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.809 [2024-05-15 01:08:48.946631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5835 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.809 [2024-05-15 01:08:48.946648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.809 [2024-05-15 01:08:48.958855] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.809 [2024-05-15 01:08:48.958899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.809 [2024-05-15 01:08:48.958915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.809 [2024-05-15 01:08:48.971753] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.809 [2024-05-15 01:08:48.971782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.809 [2024-05-15 01:08:48.971798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.809 [2024-05-15 01:08:48.983881] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.809 [2024-05-15 01:08:48.983926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.809 [2024-05-15 01:08:48.983954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.809 [2024-05-15 01:08:48.997435] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.809 [2024-05-15 01:08:48.997465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.809 [2024-05-15 01:08:48.997489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.809 [2024-05-15 01:08:49.010394] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.809 [2024-05-15 01:08:49.010422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.809 [2024-05-15 01:08:49.010438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.809 [2024-05-15 01:08:49.023328] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.809 [2024-05-15 01:08:49.023372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.809 [2024-05-15 01:08:49.023388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.809 [2024-05-15 01:08:49.035308] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.809 [2024-05-15 01:08:49.035338] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.809 [2024-05-15 01:08:49.035353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.809 [2024-05-15 01:08:49.049001] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.809 [2024-05-15 01:08:49.049031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.809 [2024-05-15 01:08:49.049047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.809 [2024-05-15 01:08:49.062498] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.809 [2024-05-15 01:08:49.062527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.809 [2024-05-15 01:08:49.062543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.809 [2024-05-15 01:08:49.075595] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.809 [2024-05-15 01:08:49.075623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.809 [2024-05-15 01:08:49.075654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.809 [2024-05-15 01:08:49.088858] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.809 [2024-05-15 01:08:49.088887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.809 [2024-05-15 01:08:49.088917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.809 [2024-05-15 01:08:49.101380] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.809 [2024-05-15 01:08:49.101410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.809 [2024-05-15 01:08:49.101426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.809 [2024-05-15 01:08:49.114129] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.809 [2024-05-15 01:08:49.114165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.809 [2024-05-15 01:08:49.114183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.809 [2024-05-15 01:08:49.127177] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 
00:21:36.809 [2024-05-15 01:08:49.127219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.809 [2024-05-15 01:08:49.127251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.809 [2024-05-15 01:08:49.140616] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.809 [2024-05-15 01:08:49.140645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.809 [2024-05-15 01:08:49.140660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.809 [2024-05-15 01:08:49.153078] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.809 [2024-05-15 01:08:49.153109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.809 [2024-05-15 01:08:49.153126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.809 [2024-05-15 01:08:49.165320] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.809 [2024-05-15 01:08:49.165349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.809 [2024-05-15 01:08:49.165365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.809 [2024-05-15 01:08:49.178780] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.809 [2024-05-15 01:08:49.178810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.809 [2024-05-15 01:08:49.178826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.809 [2024-05-15 01:08:49.191050] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:36.809 [2024-05-15 01:08:49.191080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.809 [2024-05-15 01:08:49.191097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.070 [2024-05-15 01:08:49.204345] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:37.070 [2024-05-15 01:08:49.204390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.070 [2024-05-15 01:08:49.204408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.070 [2024-05-15 01:08:49.217810] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:37.070 [2024-05-15 01:08:49.217855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.070 [2024-05-15 01:08:49.217871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.070 [2024-05-15 01:08:49.231632] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:37.070 [2024-05-15 01:08:49.231676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.070 [2024-05-15 01:08:49.231693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.070 [2024-05-15 01:08:49.243254] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:37.070 [2024-05-15 01:08:49.243284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.070 [2024-05-15 01:08:49.243300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.070 [2024-05-15 01:08:49.257464] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:37.070 [2024-05-15 01:08:49.257495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.070 [2024-05-15 01:08:49.257510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.070 [2024-05-15 01:08:49.269496] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:37.070 [2024-05-15 01:08:49.269541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.070 [2024-05-15 01:08:49.269556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.070 [2024-05-15 01:08:49.283020] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:37.070 [2024-05-15 01:08:49.283050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.070 [2024-05-15 01:08:49.283067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.070 [2024-05-15 01:08:49.296138] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:37.070 [2024-05-15 01:08:49.296168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.070 [2024-05-15 01:08:49.296185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.070 [2024-05-15 01:08:49.308588] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:37.070 [2024-05-15 01:08:49.308618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.070 [2024-05-15 01:08:49.308634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.070 [2024-05-15 01:08:49.321539] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:37.070 [2024-05-15 01:08:49.321568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.070 [2024-05-15 01:08:49.321584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.071 [2024-05-15 01:08:49.335330] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:37.071 [2024-05-15 01:08:49.335360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.071 [2024-05-15 01:08:49.335382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.071 [2024-05-15 01:08:49.346603] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:37.071 [2024-05-15 01:08:49.346633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.071 [2024-05-15 01:08:49.346649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.071 [2024-05-15 01:08:49.360703] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:37.071 [2024-05-15 01:08:49.360733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.071 [2024-05-15 01:08:49.360765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.071 [2024-05-15 01:08:49.373702] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:37.071 [2024-05-15 01:08:49.373733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.071 [2024-05-15 01:08:49.373750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.071 [2024-05-15 01:08:49.387326] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:37.071 [2024-05-15 01:08:49.387361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.071 [2024-05-15 01:08:49.387379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:21:37.071 [2024-05-15 01:08:49.401009] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:37.071 [2024-05-15 01:08:49.401039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.071 [2024-05-15 01:08:49.401056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.071 [2024-05-15 01:08:49.413012] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:37.071 [2024-05-15 01:08:49.413041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.071 [2024-05-15 01:08:49.413058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.071 [2024-05-15 01:08:49.426147] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:37.071 [2024-05-15 01:08:49.426177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.071 [2024-05-15 01:08:49.426194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.071 [2024-05-15 01:08:49.439290] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:37.071 [2024-05-15 01:08:49.439319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.071 [2024-05-15 01:08:49.439335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.071 [2024-05-15 01:08:49.451429] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f2a950) 00:21:37.071 [2024-05-15 01:08:49.451458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:1777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.071 [2024-05-15 01:08:49.451475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.071 00:21:37.071 Latency(us) 00:21:37.071 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.071 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:37.071 nvme0n1 : 2.00 18996.77 74.21 0.00 0.00 6727.30 2973.39 17476.27 00:21:37.071 =================================================================================================================== 00:21:37.071 Total : 18996.77 74.21 0.00 0.00 6727.30 2973.39 17476.27 00:21:37.071 0 00:21:37.330 01:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:37.330 01:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:37.330 01:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:37.330 01:08:49 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:37.330 | .driver_specific 00:21:37.330 | .nvme_error 00:21:37.330 | .status_code 00:21:37.330 | .command_transient_transport_error' 00:21:37.330 01:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 149 > 0 )) 00:21:37.330 01:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1338264 00:21:37.330 01:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1338264 ']' 00:21:37.330 01:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1338264 00:21:37.330 01:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:21:37.330 01:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:37.330 01:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1338264 00:21:37.590 01:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:37.590 01:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:37.590 01:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1338264' 00:21:37.590 killing process with pid 1338264 00:21:37.590 01:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1338264 00:21:37.590 Received shutdown signal, test time was about 2.000000 seconds 00:21:37.590 00:21:37.590 Latency(us) 00:21:37.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.590 =================================================================================================================== 00:21:37.590 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:37.590 01:08:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1338264 00:21:37.849 01:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:21:37.849 01:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:37.849 01:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:21:37.849 01:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:21:37.849 01:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:21:37.849 01:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1338689 00:21:37.849 01:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:21:37.849 01:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1338689 /var/tmp/bperf.sock 00:21:37.849 01:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1338689 ']' 00:21:37.849 01:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:37.849 01:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:37.849 01:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/bperf.sock...' 00:21:37.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:37.849 01:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:37.849 01:08:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:37.849 [2024-05-15 01:08:50.064134] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:21:37.849 [2024-05-15 01:08:50.064239] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1338689 ] 00:21:37.849 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:37.849 Zero copy mechanism will not be used. 00:21:37.849 EAL: No free 2048 kB hugepages reported on node 1 00:21:37.849 [2024-05-15 01:08:50.141994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.108 [2024-05-15 01:08:50.261741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:38.677 01:08:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:38.677 01:08:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:21:38.677 01:08:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:38.677 01:08:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:38.936 01:08:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:38.936 01:08:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.936 01:08:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:39.196 01:08:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.196 01:08:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:39.196 01:08:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:39.455 nvme0n1 00:21:39.455 01:08:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:39.455 01:08:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.455 01:08:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:39.455 01:08:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.455 01:08:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:39.455 01:08:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 
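The trace above sets up the second error pass of the digest test: bdevperf is started as the NVMe/TCP host for 128 KiB random reads at queue depth 16, per-NVMe error statistics are enabled with unlimited bdev retries, crc32c corruption is armed in the accel framework, the controller is attached with data digest (--ddgst) enabled, and perform_tests is driven over the bdevperf RPC socket. A minimal sketch of that sequence follows, assembled only from the commands visible in this trace; the paths, the 10.0.0.2:4420 portal and the nqn.2016-06.io.spdk:cnode1 subsystem are copied from the log, a running nvmf target exporting that subsystem is assumed, and the socket receiving the accel injection RPC is not visible in the trace, so it is left at the rpc.py default here.

#!/usr/bin/env bash
# Hedged sketch of the randread/131072/qd16 digest-error pass traced above (not the harness itself).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# 1. Start bdevperf as the TCP host: 128 KiB random reads, queue depth 16, 2 s runtime,
#    waiting for RPC configuration (-z) on $BPERF_SOCK.
$SPDK/build/examples/bdevperf -m 2 -r $BPERF_SOCK -w randread -o 131072 -t 2 -q 16 -z &
sleep 2   # the harness uses waitforlisten on the RPC socket; a short sleep stands in here

# 2. Collect per-NVMe error statistics and retry indefinitely so digest failures surface
#    as counted transient transport errors instead of failed I/O.
$SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# 3. Keep error injection disabled while the controller attaches, as the script does.
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable

# 4. Attach the controller with data digest enabled (--ddgst).
$SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 5. Arm crc32c corruption after 32 good operations, then run the workload; each corrupted
#    read is expected to complete as COMMAND TRANSIENT TRANSPORT ERROR, as in the log below.
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
$SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests

# 6. Read the transient error count back the same way get_transient_errcount does.
$SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'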
00:21:39.715 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:39.715 Zero copy mechanism will not be used. 00:21:39.715 Running I/O for 2 seconds... 00:21:39.715 [2024-05-15 01:08:51.960507] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:39.715 [2024-05-15 01:08:51.960567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.715 [2024-05-15 01:08:51.960589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.715 [2024-05-15 01:08:51.975301] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:39.715 [2024-05-15 01:08:51.975337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.715 [2024-05-15 01:08:51.975357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.715 [2024-05-15 01:08:51.989718] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:39.715 [2024-05-15 01:08:51.989752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.715 [2024-05-15 01:08:51.989770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.715 [2024-05-15 01:08:52.004219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:39.715 [2024-05-15 01:08:52.004265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.715 [2024-05-15 01:08:52.004285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.715 [2024-05-15 01:08:52.018794] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:39.715 [2024-05-15 01:08:52.018828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.715 [2024-05-15 01:08:52.018847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.715 [2024-05-15 01:08:52.033153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:39.715 [2024-05-15 01:08:52.033183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.715 [2024-05-15 01:08:52.033199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.715 [2024-05-15 01:08:52.047671] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:39.715 [2024-05-15 01:08:52.047705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.715 [2024-05-15 01:08:52.047723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.715 [2024-05-15 01:08:52.062549] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:39.715 [2024-05-15 01:08:52.062582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.715 [2024-05-15 01:08:52.062600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.715 [2024-05-15 01:08:52.076924] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:39.715 [2024-05-15 01:08:52.076980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.716 [2024-05-15 01:08:52.077002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.716 [2024-05-15 01:08:52.091462] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:39.716 [2024-05-15 01:08:52.091495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.716 [2024-05-15 01:08:52.091513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.716 [2024-05-15 01:08:52.106155] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:39.716 [2024-05-15 01:08:52.106185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.716 [2024-05-15 01:08:52.106201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.976 [2024-05-15 01:08:52.121127] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:39.976 [2024-05-15 01:08:52.121156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.976 [2024-05-15 01:08:52.121173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.976 [2024-05-15 01:08:52.135752] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:39.976 [2024-05-15 01:08:52.135785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.976 [2024-05-15 01:08:52.135804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.976 [2024-05-15 01:08:52.150178] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:39.976 [2024-05-15 01:08:52.150223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.976 [2024-05-15 01:08:52.150239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.976 [2024-05-15 01:08:52.164622] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:39.976 [2024-05-15 01:08:52.164655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.976 [2024-05-15 01:08:52.164674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.976 [2024-05-15 01:08:52.179302] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:39.976 [2024-05-15 01:08:52.179336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.976 [2024-05-15 01:08:52.179354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.976 [2024-05-15 01:08:52.194092] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:39.976 [2024-05-15 01:08:52.194120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.976 [2024-05-15 01:08:52.194136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.976 [2024-05-15 01:08:52.208561] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:39.976 [2024-05-15 01:08:52.208599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.976 [2024-05-15 01:08:52.208619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.976 [2024-05-15 01:08:52.223174] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:39.976 [2024-05-15 01:08:52.223202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.976 [2024-05-15 01:08:52.223236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.976 [2024-05-15 01:08:52.237667] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:39.976 [2024-05-15 01:08:52.237699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.976 [2024-05-15 01:08:52.237718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.976 [2024-05-15 01:08:52.252017] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:39.976 [2024-05-15 01:08:52.252046] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.976 [2024-05-15 01:08:52.252062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.976 [2024-05-15 01:08:52.266557] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:39.976 [2024-05-15 01:08:52.266590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.976 [2024-05-15 01:08:52.266609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.976 [2024-05-15 01:08:52.281092] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:39.976 [2024-05-15 01:08:52.281122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.976 [2024-05-15 01:08:52.281139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.976 [2024-05-15 01:08:52.295452] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:39.976 [2024-05-15 01:08:52.295485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.976 [2024-05-15 01:08:52.295504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.976 [2024-05-15 01:08:52.309825] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:39.976 [2024-05-15 01:08:52.309860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.976 [2024-05-15 01:08:52.309878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.976 [2024-05-15 01:08:52.324476] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:39.976 [2024-05-15 01:08:52.324521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.976 [2024-05-15 01:08:52.324545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.976 [2024-05-15 01:08:52.339273] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:39.976 [2024-05-15 01:08:52.339306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.976 [2024-05-15 01:08:52.339324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.976 [2024-05-15 01:08:52.353681] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xded850) 00:21:39.976 [2024-05-15 01:08:52.353715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.976 [2024-05-15 01:08:52.353733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.976 [2024-05-15 01:08:52.368562] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:39.976 [2024-05-15 01:08:52.368607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.976 [2024-05-15 01:08:52.368624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.237 [2024-05-15 01:08:52.383121] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.237 [2024-05-15 01:08:52.383151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.237 [2024-05-15 01:08:52.383166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.237 [2024-05-15 01:08:52.397552] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.237 [2024-05-15 01:08:52.397585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.237 [2024-05-15 01:08:52.397603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.237 [2024-05-15 01:08:52.412152] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.237 [2024-05-15 01:08:52.412182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.237 [2024-05-15 01:08:52.412198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.237 [2024-05-15 01:08:52.427007] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.237 [2024-05-15 01:08:52.427035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.237 [2024-05-15 01:08:52.427051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.237 [2024-05-15 01:08:52.441472] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.237 [2024-05-15 01:08:52.441505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.237 [2024-05-15 01:08:52.441524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.237 [2024-05-15 01:08:52.455686] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.237 [2024-05-15 01:08:52.455726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.237 [2024-05-15 01:08:52.455745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.237 [2024-05-15 01:08:52.470078] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.237 [2024-05-15 01:08:52.470120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.237 [2024-05-15 01:08:52.470136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.237 [2024-05-15 01:08:52.484638] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.237 [2024-05-15 01:08:52.484671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.238 [2024-05-15 01:08:52.484689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.238 [2024-05-15 01:08:52.499056] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.238 [2024-05-15 01:08:52.499086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.238 [2024-05-15 01:08:52.499103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.238 [2024-05-15 01:08:52.513645] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.238 [2024-05-15 01:08:52.513677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.238 [2024-05-15 01:08:52.513696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.238 [2024-05-15 01:08:52.528274] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.238 [2024-05-15 01:08:52.528320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.238 [2024-05-15 01:08:52.528338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.238 [2024-05-15 01:08:52.542699] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.238 [2024-05-15 01:08:52.542732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.238 [2024-05-15 01:08:52.542750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:21:40.238 [2024-05-15 01:08:52.557079] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.238 [2024-05-15 01:08:52.557121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.238 [2024-05-15 01:08:52.557137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.238 [2024-05-15 01:08:52.571546] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.238 [2024-05-15 01:08:52.571580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.238 [2024-05-15 01:08:52.571599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.238 [2024-05-15 01:08:52.586183] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.238 [2024-05-15 01:08:52.586227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.238 [2024-05-15 01:08:52.586247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.238 [2024-05-15 01:08:52.600580] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.238 [2024-05-15 01:08:52.600613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.238 [2024-05-15 01:08:52.600631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.238 [2024-05-15 01:08:52.614937] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.238 [2024-05-15 01:08:52.614987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.238 [2024-05-15 01:08:52.615002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.238 [2024-05-15 01:08:52.629638] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.238 [2024-05-15 01:08:52.629672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.238 [2024-05-15 01:08:52.629691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.497 [2024-05-15 01:08:52.644385] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.497 [2024-05-15 01:08:52.644420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.497 [2024-05-15 01:08:52.644439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.498 [2024-05-15 01:08:52.659081] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.498 [2024-05-15 01:08:52.659111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.498 [2024-05-15 01:08:52.659127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.498 [2024-05-15 01:08:52.673239] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.498 [2024-05-15 01:08:52.673269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.498 [2024-05-15 01:08:52.673302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.498 [2024-05-15 01:08:52.687915] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.498 [2024-05-15 01:08:52.687957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.498 [2024-05-15 01:08:52.687997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.498 [2024-05-15 01:08:52.702405] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.498 [2024-05-15 01:08:52.702438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.498 [2024-05-15 01:08:52.702463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.498 [2024-05-15 01:08:52.716752] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.498 [2024-05-15 01:08:52.716785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.498 [2024-05-15 01:08:52.716803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.498 [2024-05-15 01:08:52.731155] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.498 [2024-05-15 01:08:52.731185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.498 [2024-05-15 01:08:52.731201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.498 [2024-05-15 01:08:52.745637] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.498 [2024-05-15 01:08:52.745670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.498 [2024-05-15 01:08:52.745688] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.498 [2024-05-15 01:08:52.760010] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.498 [2024-05-15 01:08:52.760038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.498 [2024-05-15 01:08:52.760053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.498 [2024-05-15 01:08:52.774680] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.498 [2024-05-15 01:08:52.774712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.498 [2024-05-15 01:08:52.774730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.498 [2024-05-15 01:08:52.789050] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.498 [2024-05-15 01:08:52.789076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.498 [2024-05-15 01:08:52.789092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.498 [2024-05-15 01:08:52.803588] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.498 [2024-05-15 01:08:52.803622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.498 [2024-05-15 01:08:52.803640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.498 [2024-05-15 01:08:52.818026] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.498 [2024-05-15 01:08:52.818053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.498 [2024-05-15 01:08:52.818069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.498 [2024-05-15 01:08:52.832629] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.498 [2024-05-15 01:08:52.832667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.498 [2024-05-15 01:08:52.832686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.498 [2024-05-15 01:08:52.847155] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.498 [2024-05-15 01:08:52.847198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:40.498 [2024-05-15 01:08:52.847214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.498 [2024-05-15 01:08:52.861691] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.498 [2024-05-15 01:08:52.861723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.498 [2024-05-15 01:08:52.861742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.498 [2024-05-15 01:08:52.876129] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.498 [2024-05-15 01:08:52.876157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.498 [2024-05-15 01:08:52.876172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.757 [2024-05-15 01:08:52.890866] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.757 [2024-05-15 01:08:52.890900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.757 [2024-05-15 01:08:52.890918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.757 [2024-05-15 01:08:52.905761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.757 [2024-05-15 01:08:52.905794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.757 [2024-05-15 01:08:52.905812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.757 [2024-05-15 01:08:52.920197] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.757 [2024-05-15 01:08:52.920225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.757 [2024-05-15 01:08:52.920240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.757 [2024-05-15 01:08:52.934697] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.757 [2024-05-15 01:08:52.934730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.757 [2024-05-15 01:08:52.934748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.757 [2024-05-15 01:08:52.949190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.757 [2024-05-15 01:08:52.949217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.757 [2024-05-15 01:08:52.949233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.757 [2024-05-15 01:08:52.963544] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.757 [2024-05-15 01:08:52.963577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.757 [2024-05-15 01:08:52.963595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.757 [2024-05-15 01:08:52.977899] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.757 [2024-05-15 01:08:52.977939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.757 [2024-05-15 01:08:52.977959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.757 [2024-05-15 01:08:52.992515] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.757 [2024-05-15 01:08:52.992548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.757 [2024-05-15 01:08:52.992567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.757 [2024-05-15 01:08:53.006899] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.757 [2024-05-15 01:08:53.006939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.757 [2024-05-15 01:08:53.006960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.757 [2024-05-15 01:08:53.021508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.757 [2024-05-15 01:08:53.021541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.757 [2024-05-15 01:08:53.021559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.757 [2024-05-15 01:08:53.035782] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.757 [2024-05-15 01:08:53.035815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.757 [2024-05-15 01:08:53.035834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.757 [2024-05-15 01:08:53.050134] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.757 [2024-05-15 01:08:53.050163] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.757 [2024-05-15 01:08:53.050179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.757 [2024-05-15 01:08:53.064717] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.757 [2024-05-15 01:08:53.064750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.757 [2024-05-15 01:08:53.064768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.757 [2024-05-15 01:08:53.078913] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.758 [2024-05-15 01:08:53.078954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.758 [2024-05-15 01:08:53.078992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.758 [2024-05-15 01:08:53.093646] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.758 [2024-05-15 01:08:53.093678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.758 [2024-05-15 01:08:53.093696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.758 [2024-05-15 01:08:53.108064] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.758 [2024-05-15 01:08:53.108091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.758 [2024-05-15 01:08:53.108107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.758 [2024-05-15 01:08:53.122597] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.758 [2024-05-15 01:08:53.122630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.758 [2024-05-15 01:08:53.122648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.758 [2024-05-15 01:08:53.137007] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:40.758 [2024-05-15 01:08:53.137035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.758 [2024-05-15 01:08:53.137051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.017 [2024-05-15 01:08:53.151893] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 
00:21:41.017 [2024-05-15 01:08:53.151926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.017 [2024-05-15 01:08:53.151954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.017 [2024-05-15 01:08:53.166528] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:41.017 [2024-05-15 01:08:53.166560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.017 [2024-05-15 01:08:53.166578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.017 [2024-05-15 01:08:53.180945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:41.017 [2024-05-15 01:08:53.180990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.017 [2024-05-15 01:08:53.181006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.017 [2024-05-15 01:08:53.195419] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:41.017 [2024-05-15 01:08:53.195451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.017 [2024-05-15 01:08:53.195469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.017 [2024-05-15 01:08:53.209790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:41.017 [2024-05-15 01:08:53.209823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.017 [2024-05-15 01:08:53.209841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.017 [2024-05-15 01:08:53.224176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:41.017 [2024-05-15 01:08:53.224203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.017 [2024-05-15 01:08:53.224218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.017 [2024-05-15 01:08:53.238568] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:41.017 [2024-05-15 01:08:53.238601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.017 [2024-05-15 01:08:53.238620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.017 [2024-05-15 01:08:53.252942] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xded850) 00:21:41.017 [2024-05-15 01:08:53.252988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.017 [2024-05-15 01:08:53.253005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.017 [2024-05-15 01:08:53.267412] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:41.017 [2024-05-15 01:08:53.267444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.017 [2024-05-15 01:08:53.267462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.017 [2024-05-15 01:08:53.281903] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:41.017 [2024-05-15 01:08:53.281945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.017 [2024-05-15 01:08:53.281980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.017 [2024-05-15 01:08:53.296226] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:41.017 [2024-05-15 01:08:53.296256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.017 [2024-05-15 01:08:53.296287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.017 [2024-05-15 01:08:53.310610] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:41.017 [2024-05-15 01:08:53.310643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.017 [2024-05-15 01:08:53.310661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.017 [2024-05-15 01:08:53.324819] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:41.017 [2024-05-15 01:08:53.324851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.017 [2024-05-15 01:08:53.324875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.017 [2024-05-15 01:08:53.339242] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850) 00:21:41.017 [2024-05-15 01:08:53.339287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.017 [2024-05-15 01:08:53.339306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.017 [2024-05-15 01:08:53.353939] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xded850)
00:21:41.017 [2024-05-15 01:08:53.353984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.017 [2024-05-15 01:08:53.354000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... some forty further READ completions between 01:08:53.368 and 01:08:53.947 repeat the same pattern on qid:1 cid:15: nvme_tcp_accel_seq_recv_compute_crc32_done reports a data digest error on tqpair=(0xded850) and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22); only the lba and sqhd values differ ...]
00:21:41.795
00:21:41.795 Latency(us)
00:21:41.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:41.795 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:21:41.795 nvme0n1 : 2.01 2139.18 267.40 0.00 0.00 7474.86 6747.78 15243.19
00:21:41.795 ===================================================================================================================
00:21:41.795 Total : 2139.18 267.40 0.00 0.00 7474.86 6747.78 15243.19
00:21:41.795 0
00:21:41.795 01:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:21:41.795 01:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:21:41.795 01:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:21:41.795 01:08:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:21:41.795 | .driver_specific
00:21:41.795 | .nvme_error
00:21:41.795 | .status_code
00:21:41.795 | .command_transient_transport_error'
00:21:42.054 01:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 138 > 0 ))
00:21:42.054 01:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1338689
00:21:42.055 01:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1338689 ']'
00:21:42.055 01:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1338689
00:21:42.055 01:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:21:42.055 01:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:21:42.055 01:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1338689
00:21:42.055 01:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:21:42.055 01:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:21:42.055 01:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1338689'
00:21:42.055 killing process with pid 1338689
00:21:42.055 01:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1338689
00:21:42.055 Received shutdown signal, test time was about 2.000000 seconds
00:21:42.055
00:21:42.055 Latency(us)
00:21:42.055 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:42.055 ===================================================================================================================
00:21:42.055 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:42.055 01:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1338689
00:21:42.313 01:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:21:42.313 01:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw
bs qd
00:21:42.313 01:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:21:42.313 01:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:21:42.313 01:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:21:42.313 01:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1339217
00:21:42.313 01:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:21:42.313 01:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1339217 /var/tmp/bperf.sock
00:21:42.313 01:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1339217 ']'
00:21:42.313 01:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:21:42.313 01:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:21:42.313 01:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:21:42.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:21:42.313 01:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:21:42.313 01:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:42.313 [2024-05-15 01:08:54.570559] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization...
00:21:42.313 [2024-05-15 01:08:54.570655] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1339217 ]
00:21:42.313 EAL: No free 2048 kB hugepages reported on node 1
00:21:42.313 [2024-05-15 01:08:54.644624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:42.572 [2024-05-15 01:08:54.755010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:21:42.572 01:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:21:42.572 01:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:21:42.572 01:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:42.572 01:08:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:42.830 01:08:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:21:42.830 01:08:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:42.830 01:08:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:42.830 01:08:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:42.830 01:08:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420
-f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:42.830 01:08:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:43.088 nvme0n1
00:21:43.088 01:08:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:21:43.088 01:08:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:43.088 01:08:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:43.088 01:08:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:43.088 01:08:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:21:43.088 01:08:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:21:43.347 Running I/O for 2 seconds...
00:21:43.347 [2024-05-15 01:08:55.565732] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0
00:21:43.347 [2024-05-15 01:08:55.566020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:43.347 [2024-05-15 01:08:55.566062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0
[... the same pattern repeats for the remaining 4 KiB writes between 01:08:55.579 and 01:08:56.473: data_crc32_calc_done reports a Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 and each WRITE on qid:1 completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22); only the cid, lba and timestamps differ ...]
00:21:44.137 [2024-05-15 01:08:56.487068] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0
00:21:44.137 [2024-05-15 01:08:56.487349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:44.137 [2024-05-15 01:08:56.487376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0
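The write-phase completions above, and the remaining ones that follow, are exactly what this leg of host/digest.sh (nvmf_digest_error, randwrite/4096/128) is meant to produce: with crc32c corruption armed in the accel layer, every 4 KiB write sent over the data-digest-enabled TCP connection is reported with a data digest error and completes as a COMMAND TRANSIENT TRANSPORT ERROR, which the script later counts through bdev_get_iostat. What follows is a minimal sketch of the same sequence, assuming an SPDK NVMe-oF/TCP target is already listening at 10.0.0.2:4420; the socket path, the bdev name nvme0 and the -i 256 injection argument simply mirror this run, and paths are relative to the SPDK checkout used by the job.

  # start bdevperf on a private RPC socket; -z makes it wait for the perform_tests RPC
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

  # keep per-command NVMe error counters and make sure crc32c injection starts out disabled
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t disable

  # attach the controller with data digest enabled (--ddgst); this exposes nvme0n1
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # arm crc32c corruption, then run the 2-second write job
  scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 256
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

  # the injected digest errors surface as transient transport errors in the bdev iostat
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The jq expression is the same filter get_transient_errcount applies after the randread leg above; the check passes as long as it reports a non-zero count.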
00:21:44.137 [2024-05-15 01:08:56.501149] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.137 [2024-05-15 01:08:56.501430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.137 [2024-05-15 01:08:56.501457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.137 [2024-05-15 01:08:56.515041] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.137 [2024-05-15 01:08:56.515302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.137 [2024-05-15 01:08:56.515343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.137 [2024-05-15 01:08:56.528733] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.137 [2024-05-15 01:08:56.528994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.137 [2024-05-15 01:08:56.529022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.398 [2024-05-15 01:08:56.542474] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.398 [2024-05-15 01:08:56.542750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.398 [2024-05-15 01:08:56.542778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.398 [2024-05-15 01:08:56.556362] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.398 [2024-05-15 01:08:56.556639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.398 [2024-05-15 01:08:56.556667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.398 [2024-05-15 01:08:56.570176] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.398 [2024-05-15 01:08:56.570425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.398 [2024-05-15 01:08:56.570452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.398 [2024-05-15 01:08:56.584120] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.398 [2024-05-15 01:08:56.584376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.398 [2024-05-15 01:08:56.584418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:007a p:0 m:0 dnr:0 00:21:44.398 [2024-05-15 01:08:56.597903] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.398 [2024-05-15 01:08:56.598168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.398 [2024-05-15 01:08:56.598196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.398 [2024-05-15 01:08:56.611976] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.398 [2024-05-15 01:08:56.612258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.398 [2024-05-15 01:08:56.612300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.398 [2024-05-15 01:08:56.625840] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.398 [2024-05-15 01:08:56.626105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.398 [2024-05-15 01:08:56.626133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.399 [2024-05-15 01:08:56.639791] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.399 [2024-05-15 01:08:56.640068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.399 [2024-05-15 01:08:56.640095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.399 [2024-05-15 01:08:56.653800] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.399 [2024-05-15 01:08:56.654075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.399 [2024-05-15 01:08:56.654103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.399 [2024-05-15 01:08:56.667682] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.399 [2024-05-15 01:08:56.667944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.399 [2024-05-15 01:08:56.667972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.399 [2024-05-15 01:08:56.681596] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.399 [2024-05-15 01:08:56.681901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.399 [2024-05-15 01:08:56.681928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.399 [2024-05-15 01:08:56.695448] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.399 [2024-05-15 01:08:56.695760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.399 [2024-05-15 01:08:56.695802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.399 [2024-05-15 01:08:56.709377] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.399 [2024-05-15 01:08:56.709632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.399 [2024-05-15 01:08:56.709660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.399 [2024-05-15 01:08:56.723136] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.399 [2024-05-15 01:08:56.723449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.399 [2024-05-15 01:08:56.723477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.399 [2024-05-15 01:08:56.737054] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.399 [2024-05-15 01:08:56.737315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.399 [2024-05-15 01:08:56.737342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.399 [2024-05-15 01:08:56.751057] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.399 [2024-05-15 01:08:56.751323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.399 [2024-05-15 01:08:56.751366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.399 [2024-05-15 01:08:56.764896] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.399 [2024-05-15 01:08:56.765214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.399 [2024-05-15 01:08:56.765240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.399 [2024-05-15 01:08:56.778774] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.399 [2024-05-15 01:08:56.779064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.399 [2024-05-15 01:08:56.779098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.661 [2024-05-15 01:08:56.792652] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.661 [2024-05-15 01:08:56.792909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.661 [2024-05-15 01:08:56.792946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.661 [2024-05-15 01:08:56.806628] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.661 [2024-05-15 01:08:56.806898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.661 [2024-05-15 01:08:56.806926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.661 [2024-05-15 01:08:56.820635] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.661 [2024-05-15 01:08:56.820918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.661 [2024-05-15 01:08:56.820952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.661 [2024-05-15 01:08:56.834641] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.661 [2024-05-15 01:08:56.834913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.661 [2024-05-15 01:08:56.834948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.661 [2024-05-15 01:08:56.848518] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.661 [2024-05-15 01:08:56.848821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.661 [2024-05-15 01:08:56.848849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.661 [2024-05-15 01:08:56.862038] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.661 [2024-05-15 01:08:56.862299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.661 [2024-05-15 01:08:56.862326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.661 [2024-05-15 01:08:56.876043] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.661 [2024-05-15 01:08:56.876304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.661 [2024-05-15 01:08:56.876332] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.661 [2024-05-15 01:08:56.889914] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.661 [2024-05-15 01:08:56.890176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.661 [2024-05-15 01:08:56.890203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.661 [2024-05-15 01:08:56.903663] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.661 [2024-05-15 01:08:56.903968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.661 [2024-05-15 01:08:56.904000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.661 [2024-05-15 01:08:56.917842] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.661 [2024-05-15 01:08:56.918121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.661 [2024-05-15 01:08:56.918149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.661 [2024-05-15 01:08:56.931764] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.661 [2024-05-15 01:08:56.932061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.661 [2024-05-15 01:08:56.932088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.661 [2024-05-15 01:08:56.945735] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.661 [2024-05-15 01:08:56.946007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.661 [2024-05-15 01:08:56.946036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.661 [2024-05-15 01:08:56.959818] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.661 [2024-05-15 01:08:56.960122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.661 [2024-05-15 01:08:56.960149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.661 [2024-05-15 01:08:56.973687] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.661 [2024-05-15 01:08:56.973953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.661 [2024-05-15 
01:08:56.973980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.661 [2024-05-15 01:08:56.987525] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.661 [2024-05-15 01:08:56.987800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.661 [2024-05-15 01:08:56.987827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.661 [2024-05-15 01:08:57.001621] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.661 [2024-05-15 01:08:57.001895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.661 [2024-05-15 01:08:57.001923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.661 [2024-05-15 01:08:57.015451] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.661 [2024-05-15 01:08:57.015705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.661 [2024-05-15 01:08:57.015747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.661 [2024-05-15 01:08:57.029316] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.661 [2024-05-15 01:08:57.029582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.661 [2024-05-15 01:08:57.029609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.661 [2024-05-15 01:08:57.043234] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.661 [2024-05-15 01:08:57.043545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.661 [2024-05-15 01:08:57.043571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.923 [2024-05-15 01:08:57.056920] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.924 [2024-05-15 01:08:57.057170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.924 [2024-05-15 01:08:57.057197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.924 [2024-05-15 01:08:57.070709] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.924 [2024-05-15 01:08:57.070991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23467 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:44.924 [2024-05-15 01:08:57.071019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.924 [2024-05-15 01:08:57.084706] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.924 [2024-05-15 01:08:57.084952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.924 [2024-05-15 01:08:57.084980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.924 [2024-05-15 01:08:57.098590] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.924 [2024-05-15 01:08:57.098860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.924 [2024-05-15 01:08:57.098901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.924 [2024-05-15 01:08:57.112565] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.924 [2024-05-15 01:08:57.112821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.924 [2024-05-15 01:08:57.112848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.924 [2024-05-15 01:08:57.126493] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.924 [2024-05-15 01:08:57.126731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.924 [2024-05-15 01:08:57.126759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.924 [2024-05-15 01:08:57.140299] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.924 [2024-05-15 01:08:57.140569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.924 [2024-05-15 01:08:57.140605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.924 [2024-05-15 01:08:57.154157] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.924 [2024-05-15 01:08:57.154394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.924 [2024-05-15 01:08:57.154421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.924 [2024-05-15 01:08:57.168041] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.924 [2024-05-15 01:08:57.168308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2077 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.924 [2024-05-15 01:08:57.168334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.924 [2024-05-15 01:08:57.181984] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.924 [2024-05-15 01:08:57.182233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.924 [2024-05-15 01:08:57.182260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.924 [2024-05-15 01:08:57.195859] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.924 [2024-05-15 01:08:57.196156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.924 [2024-05-15 01:08:57.196183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.924 [2024-05-15 01:08:57.209709] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.924 [2024-05-15 01:08:57.209981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.924 [2024-05-15 01:08:57.210009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.924 [2024-05-15 01:08:57.223680] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.924 [2024-05-15 01:08:57.223945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.924 [2024-05-15 01:08:57.223978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.924 [2024-05-15 01:08:57.237851] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.924 [2024-05-15 01:08:57.238130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.924 [2024-05-15 01:08:57.238157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.924 [2024-05-15 01:08:57.251831] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.924 [2024-05-15 01:08:57.252127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.924 [2024-05-15 01:08:57.252154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.924 [2024-05-15 01:08:57.265784] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.924 [2024-05-15 01:08:57.266060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 
lba:17249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.924 [2024-05-15 01:08:57.266092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.924 [2024-05-15 01:08:57.279482] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.924 [2024-05-15 01:08:57.279786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.924 [2024-05-15 01:08:57.279812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.924 [2024-05-15 01:08:57.293528] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.924 [2024-05-15 01:08:57.293828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.924 [2024-05-15 01:08:57.293855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.924 [2024-05-15 01:08:57.307414] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:44.924 [2024-05-15 01:08:57.307701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.924 [2024-05-15 01:08:57.307742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:45.185 [2024-05-15 01:08:57.321213] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:45.185 [2024-05-15 01:08:57.321504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.185 [2024-05-15 01:08:57.321532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:45.185 [2024-05-15 01:08:57.335115] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:45.185 [2024-05-15 01:08:57.335366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.185 [2024-05-15 01:08:57.335409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:45.185 [2024-05-15 01:08:57.349078] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:45.185 [2024-05-15 01:08:57.349352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.185 [2024-05-15 01:08:57.349379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:45.185 [2024-05-15 01:08:57.362956] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:45.185 [2024-05-15 01:08:57.363197] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.185 [2024-05-15 01:08:57.363224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:45.185 [2024-05-15 01:08:57.376819] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:45.185 [2024-05-15 01:08:57.377114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.185 [2024-05-15 01:08:57.377141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:45.185 [2024-05-15 01:08:57.390756] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:45.185 [2024-05-15 01:08:57.391051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.185 [2024-05-15 01:08:57.391079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:45.185 [2024-05-15 01:08:57.404712] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:45.185 [2024-05-15 01:08:57.404984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.185 [2024-05-15 01:08:57.405011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:45.185 [2024-05-15 01:08:57.418677] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:45.185 [2024-05-15 01:08:57.418946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.185 [2024-05-15 01:08:57.418973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:45.185 [2024-05-15 01:08:57.432514] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:45.186 [2024-05-15 01:08:57.432785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.186 [2024-05-15 01:08:57.432812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:45.186 [2024-05-15 01:08:57.446759] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:45.186 [2024-05-15 01:08:57.447046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.186 [2024-05-15 01:08:57.447073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:45.186 [2024-05-15 01:08:57.460645] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:45.186 [2024-05-15 01:08:57.460936] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.186 [2024-05-15 01:08:57.460977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:45.186 [2024-05-15 01:08:57.474727] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:45.186 [2024-05-15 01:08:57.475003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.186 [2024-05-15 01:08:57.475030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:45.186 [2024-05-15 01:08:57.488808] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:45.186 [2024-05-15 01:08:57.489102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.186 [2024-05-15 01:08:57.489129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:45.186 [2024-05-15 01:08:57.502793] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:45.186 [2024-05-15 01:08:57.503054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.186 [2024-05-15 01:08:57.503081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:45.186 [2024-05-15 01:08:57.516835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:45.186 [2024-05-15 01:08:57.517108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.186 [2024-05-15 01:08:57.517135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:45.186 [2024-05-15 01:08:57.530842] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:45.186 [2024-05-15 01:08:57.531138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.186 [2024-05-15 01:08:57.531165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:45.186 [2024-05-15 01:08:57.544435] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:45.186 [2024-05-15 01:08:57.544696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.186 [2024-05-15 01:08:57.544722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:45.186 [2024-05-15 01:08:57.558298] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe190) with pdu=0x2000190fdeb0 00:21:45.186 [2024-05-15 
01:08:57.558570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.186 [2024-05-15 01:08:57.558598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:45.186 00:21:45.186 Latency(us) 00:21:45.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.186 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:45.186 nvme0n1 : 2.01 18319.19 71.56 0.00 0.00 6969.90 2949.12 14272.28 00:21:45.186 =================================================================================================================== 00:21:45.186 Total : 18319.19 71.56 0.00 0.00 6969.90 2949.12 14272.28 00:21:45.186 0 00:21:45.446 01:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:45.446 01:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:45.446 01:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:45.446 01:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:45.446 | .driver_specific 00:21:45.446 | .nvme_error 00:21:45.446 | .status_code 00:21:45.446 | .command_transient_transport_error' 00:21:45.446 01:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 144 > 0 )) 00:21:45.446 01:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1339217 00:21:45.446 01:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1339217 ']' 00:21:45.446 01:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1339217 00:21:45.446 01:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:21:45.446 01:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:45.446 01:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1339217 00:21:45.705 01:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:45.705 01:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:45.705 01:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1339217' 00:21:45.705 killing process with pid 1339217 00:21:45.705 01:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1339217 00:21:45.705 Received shutdown signal, test time was about 2.000000 seconds 00:21:45.705 00:21:45.705 Latency(us) 00:21:45.705 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.705 =================================================================================================================== 00:21:45.705 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:45.705 01:08:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1339217 00:21:45.964 01:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:21:45.964 01:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@54 -- # local rw bs qd 00:21:45.964 01:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:21:45.964 01:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:21:45.964 01:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:21:45.964 01:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1339744 00:21:45.964 01:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:21:45.964 01:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1339744 /var/tmp/bperf.sock 00:21:45.964 01:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1339744 ']' 00:21:45.964 01:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:45.964 01:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:45.964 01:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:45.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:45.964 01:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:45.964 01:08:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:45.964 [2024-05-15 01:08:58.170233] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:21:45.964 [2024-05-15 01:08:58.170311] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1339744 ] 00:21:45.964 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:45.964 Zero copy mechanism will not be used. 
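
For reference, the bdevperf launch traced above boils down to starting the tool in wait-for-RPC mode (-z) on a private UNIX socket and polling that socket until it answers. A minimal stand-alone sketch, assuming the binary and socket paths shown in the trace (the retry loop is a simplified stand-in for the harness's waitforlisten helper):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bperf.sock
  # 128 KiB random writes, queue depth 16, 2-second runs, core mask 0x2, RPC-driven (-z)
  "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # wait until the bdevperf RPC server is listening on the socket
  until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

With -z, bdevperf waits for configuration over that socket and starts no I/O until perform_tests is issued, which is what lets the error injection below be armed before the run begins.
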
00:21:45.964 EAL: No free 2048 kB hugepages reported on node 1 00:21:45.964 [2024-05-15 01:08:58.242195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.964 [2024-05-15 01:08:58.356042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.902 01:08:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:46.902 01:08:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:21:46.902 01:08:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:46.902 01:08:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:47.162 01:08:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:47.162 01:08:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.162 01:08:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:47.162 01:08:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.162 01:08:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:47.162 01:08:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:47.421 nvme0n1 00:21:47.421 01:08:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:47.421 01:08:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.421 01:08:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:47.421 01:08:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.421 01:08:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:47.421 01:08:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:47.680 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:47.680 Zero copy mechanism will not be used. 00:21:47.680 Running I/O for 2 seconds... 
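
The setup for this second pass is driven entirely over RPC, as the xtrace above shows. Condensed into plain commands, this is roughly the sequence (a sketch reusing the addresses and socket paths from the trace; rpc_cmd without -s is taken to target the nvmf target's default RPC socket, which is an assumption):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC=$SPDK/scripts/rpc.py
  # bdevperf side: keep NVMe error statistics and retry failed I/O indefinitely
  $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # target side: make sure crc32c error injection starts out disabled
  $RPC accel_error_inject_error -o crc32c -t disable
  # attach the TCP controller with data digest enabled (--ddgst); it surfaces as nvme0n1
  $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # target side: start corrupting crc32c results (-i 32, as in the trace)
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  # run the 2-second workload
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  # afterwards, read back how many completions failed with a transient transport error
  $RPC -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

That final count is what the (( 144 > 0 )) assertion earlier in the log checked for the first pass; the *ERROR* lines from tcp.c and the COMMAND TRANSIENT TRANSPORT ERROR completions printed above are the visible side of the same injected digest failures.
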
00:21:47.680 [2024-05-15 01:08:59.917174] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:47.680 [2024-05-15 01:08:59.917659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.680 [2024-05-15 01:08:59.917699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:47.680 [2024-05-15 01:08:59.940849] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:47.680 [2024-05-15 01:08:59.941518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.680 [2024-05-15 01:08:59.941553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:47.680 [2024-05-15 01:08:59.963068] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:47.680 [2024-05-15 01:08:59.963464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.680 [2024-05-15 01:08:59.963508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:47.680 [2024-05-15 01:08:59.984579] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:47.680 [2024-05-15 01:08:59.985052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.680 [2024-05-15 01:08:59.985093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:47.680 [2024-05-15 01:09:00.006216] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:47.680 [2024-05-15 01:09:00.006656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.680 [2024-05-15 01:09:00.006688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:47.680 [2024-05-15 01:09:00.024539] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:47.680 [2024-05-15 01:09:00.025032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.680 [2024-05-15 01:09:00.025082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:47.680 [2024-05-15 01:09:00.042650] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:47.680 [2024-05-15 01:09:00.043127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.680 [2024-05-15 01:09:00.043165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:47.680 [2024-05-15 01:09:00.061263] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:47.680 [2024-05-15 01:09:00.061806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.680 [2024-05-15 01:09:00.061855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:47.939 [2024-05-15 01:09:00.083241] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:47.939 [2024-05-15 01:09:00.083756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.939 [2024-05-15 01:09:00.083784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:47.939 [2024-05-15 01:09:00.106174] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:47.939 [2024-05-15 01:09:00.106674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.939 [2024-05-15 01:09:00.106716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:47.939 [2024-05-15 01:09:00.130801] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:47.939 [2024-05-15 01:09:00.131457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.939 [2024-05-15 01:09:00.131484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:47.939 [2024-05-15 01:09:00.155197] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:47.939 [2024-05-15 01:09:00.155786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.939 [2024-05-15 01:09:00.155813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:47.939 [2024-05-15 01:09:00.179608] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:47.939 [2024-05-15 01:09:00.180020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.939 [2024-05-15 01:09:00.180060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:47.939 [2024-05-15 01:09:00.204319] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:47.939 [2024-05-15 01:09:00.204900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.939 [2024-05-15 01:09:00.204948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:47.939 [2024-05-15 01:09:00.229320] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:47.939 [2024-05-15 01:09:00.229710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.939 [2024-05-15 01:09:00.229751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:47.939 [2024-05-15 01:09:00.252609] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:47.939 [2024-05-15 01:09:00.253127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.939 [2024-05-15 01:09:00.253169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:47.939 [2024-05-15 01:09:00.276286] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:47.939 [2024-05-15 01:09:00.276746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.939 [2024-05-15 01:09:00.276788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:47.939 [2024-05-15 01:09:00.299941] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:47.939 [2024-05-15 01:09:00.300434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.940 [2024-05-15 01:09:00.300462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:47.940 [2024-05-15 01:09:00.324430] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:47.940 [2024-05-15 01:09:00.324887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.940 [2024-05-15 01:09:00.324913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.199 [2024-05-15 01:09:00.347864] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.199 [2024-05-15 01:09:00.348356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.199 [2024-05-15 01:09:00.348383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.199 [2024-05-15 01:09:00.372506] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.199 [2024-05-15 01:09:00.373092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.199 [2024-05-15 01:09:00.373122] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.199 [2024-05-15 01:09:00.395638] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.199 [2024-05-15 01:09:00.396158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.199 [2024-05-15 01:09:00.396201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.199 [2024-05-15 01:09:00.419462] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.199 [2024-05-15 01:09:00.420004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.199 [2024-05-15 01:09:00.420036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.199 [2024-05-15 01:09:00.444484] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.199 [2024-05-15 01:09:00.445132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.199 [2024-05-15 01:09:00.445160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.199 [2024-05-15 01:09:00.468571] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.199 [2024-05-15 01:09:00.469147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.199 [2024-05-15 01:09:00.469175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.199 [2024-05-15 01:09:00.492281] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.199 [2024-05-15 01:09:00.492760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.199 [2024-05-15 01:09:00.492788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.199 [2024-05-15 01:09:00.515592] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.199 [2024-05-15 01:09:00.516164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.199 [2024-05-15 01:09:00.516193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.199 [2024-05-15 01:09:00.538789] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.199 [2024-05-15 01:09:00.539372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.199 
[2024-05-15 01:09:00.539400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.199 [2024-05-15 01:09:00.562550] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.199 [2024-05-15 01:09:00.563166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.199 [2024-05-15 01:09:00.563211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.199 [2024-05-15 01:09:00.586856] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.199 [2024-05-15 01:09:00.587329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.199 [2024-05-15 01:09:00.587372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.459 [2024-05-15 01:09:00.609387] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.459 [2024-05-15 01:09:00.610058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.459 [2024-05-15 01:09:00.610088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.459 [2024-05-15 01:09:00.632185] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.459 [2024-05-15 01:09:00.632814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.459 [2024-05-15 01:09:00.632841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.459 [2024-05-15 01:09:00.656941] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.459 [2024-05-15 01:09:00.657406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.459 [2024-05-15 01:09:00.657434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.459 [2024-05-15 01:09:00.680817] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.459 [2024-05-15 01:09:00.681320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.459 [2024-05-15 01:09:00.681348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.459 [2024-05-15 01:09:00.704395] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.459 [2024-05-15 01:09:00.704779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.459 [2024-05-15 01:09:00.704808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.459 [2024-05-15 01:09:00.727626] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.459 [2024-05-15 01:09:00.728038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.459 [2024-05-15 01:09:00.728068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.459 [2024-05-15 01:09:00.752327] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.459 [2024-05-15 01:09:00.752894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.459 [2024-05-15 01:09:00.752945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.459 [2024-05-15 01:09:00.776243] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.459 [2024-05-15 01:09:00.776882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.459 [2024-05-15 01:09:00.776926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.459 [2024-05-15 01:09:00.798295] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.459 [2024-05-15 01:09:00.798717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.459 [2024-05-15 01:09:00.798760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.459 [2024-05-15 01:09:00.822279] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.459 [2024-05-15 01:09:00.822699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.459 [2024-05-15 01:09:00.822728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.459 [2024-05-15 01:09:00.845400] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.459 [2024-05-15 01:09:00.846042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.459 [2024-05-15 01:09:00.846070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.719 [2024-05-15 01:09:00.869228] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.719 [2024-05-15 01:09:00.869798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-05-15 01:09:00.869840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.719 [2024-05-15 01:09:00.893393] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.719 [2024-05-15 01:09:00.894036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-05-15 01:09:00.894078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.719 [2024-05-15 01:09:00.917770] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.719 [2024-05-15 01:09:00.918328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-05-15 01:09:00.918356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.719 [2024-05-15 01:09:00.941107] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.719 [2024-05-15 01:09:00.941474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-05-15 01:09:00.941516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.719 [2024-05-15 01:09:00.965560] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.719 [2024-05-15 01:09:00.966048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-05-15 01:09:00.966076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.719 [2024-05-15 01:09:00.989057] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.719 [2024-05-15 01:09:00.989529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-05-15 01:09:00.989572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.719 [2024-05-15 01:09:01.011843] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.719 [2024-05-15 01:09:01.012418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-05-15 01:09:01.012446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.719 [2024-05-15 01:09:01.035185] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.719 [2024-05-15 01:09:01.035639] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-05-15 01:09:01.035690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.719 [2024-05-15 01:09:01.058267] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.719 [2024-05-15 01:09:01.058670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-05-15 01:09:01.058712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.719 [2024-05-15 01:09:01.082106] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.719 [2024-05-15 01:09:01.082574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-05-15 01:09:01.082602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.719 [2024-05-15 01:09:01.105758] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.719 [2024-05-15 01:09:01.106385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.719 [2024-05-15 01:09:01.106413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.978 [2024-05-15 01:09:01.130063] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.978 [2024-05-15 01:09:01.130571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.978 [2024-05-15 01:09:01.130599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.978 [2024-05-15 01:09:01.153521] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.978 [2024-05-15 01:09:01.153951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.978 [2024-05-15 01:09:01.154002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.978 [2024-05-15 01:09:01.177118] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.978 [2024-05-15 01:09:01.177705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.978 [2024-05-15 01:09:01.177733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.978 [2024-05-15 01:09:01.200530] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.978 
[2024-05-15 01:09:01.201163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.978 [2024-05-15 01:09:01.201191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.978 [2024-05-15 01:09:01.223637] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.978 [2024-05-15 01:09:01.224213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.978 [2024-05-15 01:09:01.224256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.978 [2024-05-15 01:09:01.248508] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.978 [2024-05-15 01:09:01.249115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.978 [2024-05-15 01:09:01.249144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.978 [2024-05-15 01:09:01.272202] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.978 [2024-05-15 01:09:01.272905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.978 [2024-05-15 01:09:01.272952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.978 [2024-05-15 01:09:01.294836] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.978 [2024-05-15 01:09:01.295219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.978 [2024-05-15 01:09:01.295262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.978 [2024-05-15 01:09:01.318492] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.979 [2024-05-15 01:09:01.319071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.979 [2024-05-15 01:09:01.319100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.979 [2024-05-15 01:09:01.342396] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.979 [2024-05-15 01:09:01.342892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.979 [2024-05-15 01:09:01.342921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.979 [2024-05-15 01:09:01.365773] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:48.979 [2024-05-15 01:09:01.366326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.979 [2024-05-15 01:09:01.366369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.239 [2024-05-15 01:09:01.390328] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:49.239 [2024-05-15 01:09:01.390856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.239 [2024-05-15 01:09:01.390883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.239 [2024-05-15 01:09:01.413740] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:49.239 [2024-05-15 01:09:01.414162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.239 [2024-05-15 01:09:01.414207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.239 [2024-05-15 01:09:01.438080] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:49.239 [2024-05-15 01:09:01.438779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.239 [2024-05-15 01:09:01.438812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.239 [2024-05-15 01:09:01.463017] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:49.239 [2024-05-15 01:09:01.463597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.239 [2024-05-15 01:09:01.463624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.239 [2024-05-15 01:09:01.488039] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:49.239 [2024-05-15 01:09:01.488506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.239 [2024-05-15 01:09:01.488551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.239 [2024-05-15 01:09:01.512287] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:49.239 [2024-05-15 01:09:01.512776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.239 [2024-05-15 01:09:01.512818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.239 [2024-05-15 01:09:01.535955] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:49.239 [2024-05-15 01:09:01.536491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.239 [2024-05-15 01:09:01.536531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.239 [2024-05-15 01:09:01.560155] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:49.239 [2024-05-15 01:09:01.560873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.239 [2024-05-15 01:09:01.560900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.239 [2024-05-15 01:09:01.585387] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:49.239 [2024-05-15 01:09:01.585766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.239 [2024-05-15 01:09:01.585808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.239 [2024-05-15 01:09:01.610190] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:49.239 [2024-05-15 01:09:01.610582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.239 [2024-05-15 01:09:01.610609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.499 [2024-05-15 01:09:01.633037] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:49.499 [2024-05-15 01:09:01.633662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.499 [2024-05-15 01:09:01.633689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.499 [2024-05-15 01:09:01.657249] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:49.499 [2024-05-15 01:09:01.657745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.499 [2024-05-15 01:09:01.657771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.499 [2024-05-15 01:09:01.681680] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:49.499 [2024-05-15 01:09:01.682239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.499 [2024-05-15 01:09:01.682272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:21:49.499 [2024-05-15 01:09:01.705759] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:49.499 [2024-05-15 01:09:01.706349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.499 [2024-05-15 01:09:01.706381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.499 [2024-05-15 01:09:01.729113] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:49.499 [2024-05-15 01:09:01.729734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.499 [2024-05-15 01:09:01.729760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.499 [2024-05-15 01:09:01.752355] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:49.499 [2024-05-15 01:09:01.752755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.499 [2024-05-15 01:09:01.752783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.499 [2024-05-15 01:09:01.775459] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:49.499 [2024-05-15 01:09:01.775939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.499 [2024-05-15 01:09:01.775965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.499 [2024-05-15 01:09:01.800188] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:49.499 [2024-05-15 01:09:01.800731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.499 [2024-05-15 01:09:01.800758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.499 [2024-05-15 01:09:01.823877] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:49.499 [2024-05-15 01:09:01.824311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.499 [2024-05-15 01:09:01.824338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.499 [2024-05-15 01:09:01.847053] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:49.499 [2024-05-15 01:09:01.847449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.499 [2024-05-15 01:09:01.847491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.499 [2024-05-15 01:09:01.870547] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:49.499 [2024-05-15 01:09:01.871016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.499 [2024-05-15 01:09:01.871044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.758 [2024-05-15 01:09:01.894959] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1dfe540) with pdu=0x2000190fef90 00:21:49.758 [2024-05-15 01:09:01.895491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.758 [2024-05-15 01:09:01.895517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.758 00:21:49.758 Latency(us) 00:21:49.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:49.758 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:49.758 nvme0n1 : 2.01 1317.26 164.66 0.00 0.00 12105.16 6407.96 25437.68 00:21:49.758 =================================================================================================================== 00:21:49.758 Total : 1317.26 164.66 0.00 0.00 12105.16 6407.96 25437.68 00:21:49.758 0 00:21:49.758 01:09:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:49.758 01:09:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:49.758 01:09:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:49.758 01:09:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:49.758 | .driver_specific 00:21:49.758 | .nvme_error 00:21:49.758 | .status_code 00:21:49.758 | .command_transient_transport_error' 00:21:50.018 01:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 85 > 0 )) 00:21:50.018 01:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1339744 00:21:50.018 01:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1339744 ']' 00:21:50.018 01:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1339744 00:21:50.018 01:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:21:50.018 01:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:50.018 01:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1339744 00:21:50.018 01:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:50.018 01:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:50.018 01:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1339744' 00:21:50.018 killing process with pid 1339744 00:21:50.018 01:09:02 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1339744 00:21:50.018 Received shutdown signal, test time was about 2.000000 seconds 00:21:50.018 00:21:50.018 Latency(us) 00:21:50.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.018 =================================================================================================================== 00:21:50.018 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:50.018 01:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1339744 00:21:50.278 01:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1338122 00:21:50.278 01:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1338122 ']' 00:21:50.278 01:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1338122 00:21:50.278 01:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:21:50.278 01:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:50.278 01:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1338122 00:21:50.278 01:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:50.278 01:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:50.278 01:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1338122' 00:21:50.278 killing process with pid 1338122 00:21:50.278 01:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1338122 00:21:50.278 [2024-05-15 01:09:02.521410] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:50.278 01:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1338122 00:21:50.537 00:21:50.537 real 0m17.462s 00:21:50.537 user 0m35.681s 00:21:50.537 sys 0m4.046s 00:21:50.537 01:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:50.537 01:09:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:50.537 ************************************ 00:21:50.537 END TEST nvmf_digest_error 00:21:50.537 ************************************ 00:21:50.537 01:09:02 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:21:50.537 01:09:02 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:21:50.537 01:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:50.537 01:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:21:50.537 01:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:50.537 01:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:21:50.537 01:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:50.537 01:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:50.537 rmmod nvme_tcp 00:21:50.537 rmmod nvme_fabrics 00:21:50.537 rmmod nvme_keyring 00:21:50.537 01:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:50.537 01:09:02 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@124 -- # set -e 00:21:50.537 01:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:21:50.537 01:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1338122 ']' 00:21:50.537 01:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1338122 00:21:50.537 01:09:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 1338122 ']' 00:21:50.537 01:09:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 1338122 00:21:50.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1338122) - No such process 00:21:50.537 01:09:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 1338122 is not found' 00:21:50.537 Process with pid 1338122 is not found 00:21:50.537 01:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:50.537 01:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:50.537 01:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:50.537 01:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:50.537 01:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:50.537 01:09:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.537 01:09:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:50.537 01:09:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.077 01:09:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:53.077 00:21:53.077 real 0m39.330s 00:21:53.077 user 1m9.496s 00:21:53.077 sys 0m10.185s 00:21:53.077 01:09:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:53.077 01:09:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:53.077 ************************************ 00:21:53.077 END TEST nvmf_digest 00:21:53.077 ************************************ 00:21:53.077 01:09:04 nvmf_tcp -- nvmf/nvmf.sh@109 -- # [[ 0 -eq 1 ]] 00:21:53.077 01:09:04 nvmf_tcp -- nvmf/nvmf.sh@114 -- # [[ 0 -eq 1 ]] 00:21:53.077 01:09:04 nvmf_tcp -- nvmf/nvmf.sh@119 -- # [[ phy == phy ]] 00:21:53.077 01:09:04 nvmf_tcp -- nvmf/nvmf.sh@120 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:21:53.077 01:09:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:53.077 01:09:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:53.077 01:09:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:53.077 ************************************ 00:21:53.077 START TEST nvmf_bdevperf 00:21:53.077 ************************************ 00:21:53.077 01:09:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:21:53.077 * Looking for test storage... 
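The nvmf_digest_error run that wraps up above stands or falls on a single counter: after about two seconds of queue-depth-16 random writes that all complete with data digest (CRC32C) failures, host/digest.sh reads bdevperf's per-bdev error statistics over the bperf RPC socket and requires the transient transport error count to be non-zero (85 in this run). A minimal sketch of that check, reusing the rpc.py invocation and jq filter shown in the trace above (only the errs variable name is added here for illustration):

  errs=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
             bdev_get_iostat -b nvme0n1 |
         jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # Every WRITE that completed above with "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" bumps this
  # counter, so the test only passes if the digest errors actually reached the host as failed commands.
  (( errs > 0 ))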
00:21:53.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:53.077 01:09:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:53.077 01:09:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:21:53.077 01:09:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:53.077 01:09:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:53.077 01:09:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:53.077 01:09:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:53.077 01:09:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:53.077 01:09:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:53.077 01:09:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:53.077 01:09:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:53.077 01:09:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:53.077 01:09:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:53.077 01:09:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:53.077 01:09:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:53.077 01:09:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:53.077 01:09:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:53.077 01:09:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:53.077 01:09:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:53.077 01:09:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:53.077 01:09:05 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:53.077 01:09:05 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:53.077 01:09:05 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:53.077 01:09:05 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.077 01:09:05 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.077 01:09:05 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.077 01:09:05 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:21:53.077 01:09:05 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.077 01:09:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:21:53.077 01:09:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:53.078 01:09:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:53.078 01:09:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:53.078 01:09:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:53.078 01:09:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:53.078 01:09:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:53.078 01:09:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:53.078 01:09:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:53.078 01:09:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:53.078 01:09:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:53.078 01:09:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:21:53.078 01:09:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:53.078 01:09:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:53.078 01:09:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:53.078 01:09:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:53.078 01:09:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:53.078 01:09:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.078 01:09:05 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:53.078 01:09:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.078 01:09:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:53.078 01:09:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:53.078 01:09:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:21:53.078 01:09:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:55.613 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:55.613 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:55.613 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:55.613 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:55.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:55.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:21:55.613 00:21:55.613 --- 10.0.0.2 ping statistics --- 00:21:55.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.613 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:55.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:55.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:21:55.613 00:21:55.613 --- 10.0.0.1 ping statistics --- 00:21:55.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.613 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:55.613 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:55.614 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:55.614 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:55.614 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:55.614 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:55.614 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:55.614 01:09:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:21:55.614 01:09:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:21:55.614 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:55.614 01:09:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:55.614 01:09:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:55.614 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1342785 00:21:55.614 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:55.614 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1342785 00:21:55.614 01:09:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 1342785 ']' 00:21:55.614 01:09:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.614 01:09:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:55.614 01:09:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.614 01:09:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:55.614 01:09:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:55.614 [2024-05-15 01:09:07.666610] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:21:55.614 [2024-05-15 01:09:07.666702] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:55.614 EAL: No free 2048 kB hugepages reported on node 1 00:21:55.614 [2024-05-15 01:09:07.741979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:55.614 [2024-05-15 01:09:07.851833] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
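The nvmf_tcp_init trace above splits the two e810 ports into a small point-to-point topology before the target application comes up: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (the target side), cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits NVMe/TCP traffic on port 4420, and a ping in each direction confirms the path. A minimal stand-alone sketch of that wiring, using the interface names, addresses and port taken from the trace (packaging it as one script is illustrative only):

#!/usr/bin/env bash
# Sketch of the namespace wiring performed by nvmf_tcp_init in the trace above.
set -euo pipefail

TARGET_IF=cvl_0_0        # port that will host the NVMe/TCP target
INITIATOR_IF=cvl_0_1     # port left in the root namespace for the initiator
NETNS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NETNS"
ip link set "$TARGET_IF" netns "$NETNS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NETNS" ip link set "$TARGET_IF" up
ip netns exec "$NETNS" ip link set lo up

# Let NVMe/TCP connections from the initiator-facing port through the firewall.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Verify both directions, exactly as the trace does.
ping -c 1 10.0.0.2
ip netns exec "$NETNS" ping -c 1 10.0.0.1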
00:21:55.614 [2024-05-15 01:09:07.851889] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:55.614 [2024-05-15 01:09:07.851917] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:55.614 [2024-05-15 01:09:07.851928] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:55.614 [2024-05-15 01:09:07.851945] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:55.614 [2024-05-15 01:09:07.852076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:55.614 [2024-05-15 01:09:07.852138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:55.614 [2024-05-15 01:09:07.852141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:55.614 01:09:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:55.614 01:09:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:21:55.614 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:55.614 01:09:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:55.614 01:09:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:55.614 01:09:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:55.614 01:09:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:55.614 01:09:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.614 01:09:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:55.614 [2024-05-15 01:09:07.989471] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:55.614 01:09:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.614 01:09:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:55.614 01:09:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.614 01:09:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:55.872 Malloc0 00:21:55.872 01:09:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.872 01:09:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:55.872 01:09:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.872 01:09:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:55.872 01:09:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.872 01:09:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:55.872 01:09:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.872 01:09:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:55.872 01:09:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.872 01:09:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:55.872 01:09:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:21:55.872 01:09:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:55.872 [2024-05-15 01:09:08.054712] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:55.872 [2024-05-15 01:09:08.055007] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:55.873 01:09:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.873 01:09:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:21:55.873 01:09:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:21:55.873 01:09:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:21:55.873 01:09:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:21:55.873 01:09:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:55.873 01:09:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:55.873 { 00:21:55.873 "params": { 00:21:55.873 "name": "Nvme$subsystem", 00:21:55.873 "trtype": "$TEST_TRANSPORT", 00:21:55.873 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.873 "adrfam": "ipv4", 00:21:55.873 "trsvcid": "$NVMF_PORT", 00:21:55.873 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.873 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.873 "hdgst": ${hdgst:-false}, 00:21:55.873 "ddgst": ${ddgst:-false} 00:21:55.873 }, 00:21:55.873 "method": "bdev_nvme_attach_controller" 00:21:55.873 } 00:21:55.873 EOF 00:21:55.873 )") 00:21:55.873 01:09:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:21:55.873 01:09:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:21:55.873 01:09:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:21:55.873 01:09:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:55.873 "params": { 00:21:55.873 "name": "Nvme1", 00:21:55.873 "trtype": "tcp", 00:21:55.873 "traddr": "10.0.0.2", 00:21:55.873 "adrfam": "ipv4", 00:21:55.873 "trsvcid": "4420", 00:21:55.873 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:55.873 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:55.873 "hdgst": false, 00:21:55.873 "ddgst": false 00:21:55.873 }, 00:21:55.873 "method": "bdev_nvme_attach_controller" 00:21:55.873 }' 00:21:55.873 [2024-05-15 01:09:08.103710] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:21:55.873 [2024-05-15 01:09:08.103786] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1342915 ] 00:21:55.873 EAL: No free 2048 kB hugepages reported on node 1 00:21:55.873 [2024-05-15 01:09:08.179056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.133 [2024-05-15 01:09:08.296874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.391 Running I/O for 1 seconds... 
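Before the bdevperf run above, host/bdevperf.sh provisions the target entirely over JSON-RPC: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with any host allowed (-a), that bdev attached as a namespace, and a TCP listener on 10.0.0.2:4420. The rpc_cmd helper in the trace wraps SPDK's standard scripts/rpc.py client, so issued by hand against the default /var/tmp/spdk.sock socket the same sequence would look roughly like this (flags copied from the trace):

# create the TCP transport used by the target (flags as in host/bdevperf.sh@17)
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# 64 MiB RAM-backed bdev with a 512-byte block size
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# subsystem that allows any host (-a), with the serial number used by the test
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
# expose the malloc bdev as a namespace of the subsystem
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# listen for NVMe/TCP connections on the target-side address
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420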
00:21:57.324 00:21:57.324 Latency(us) 00:21:57.324 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.324 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:57.324 Verification LBA range: start 0x0 length 0x4000 00:21:57.324 Nvme1n1 : 1.01 8177.85 31.94 0.00 0.00 15570.69 2512.21 20097.71 00:21:57.324 =================================================================================================================== 00:21:57.324 Total : 8177.85 31.94 0.00 0.00 15570.69 2512.21 20097.71 00:21:57.582 01:09:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1343427 00:21:57.582 01:09:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:21:57.582 01:09:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:21:57.582 01:09:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:21:57.582 01:09:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:21:57.582 01:09:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:21:57.582 01:09:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:57.582 01:09:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:57.582 { 00:21:57.582 "params": { 00:21:57.582 "name": "Nvme$subsystem", 00:21:57.582 "trtype": "$TEST_TRANSPORT", 00:21:57.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.582 "adrfam": "ipv4", 00:21:57.582 "trsvcid": "$NVMF_PORT", 00:21:57.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.582 "hdgst": ${hdgst:-false}, 00:21:57.582 "ddgst": ${ddgst:-false} 00:21:57.582 }, 00:21:57.582 "method": "bdev_nvme_attach_controller" 00:21:57.582 } 00:21:57.582 EOF 00:21:57.582 )") 00:21:57.582 01:09:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:21:57.582 01:09:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:21:57.582 01:09:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:21:57.582 01:09:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:57.582 "params": { 00:21:57.582 "name": "Nvme1", 00:21:57.582 "trtype": "tcp", 00:21:57.582 "traddr": "10.0.0.2", 00:21:57.582 "adrfam": "ipv4", 00:21:57.582 "trsvcid": "4420", 00:21:57.582 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:57.582 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:57.582 "hdgst": false, 00:21:57.582 "ddgst": false 00:21:57.582 }, 00:21:57.582 "method": "bdev_nvme_attach_controller" 00:21:57.582 }' 00:21:57.582 [2024-05-15 01:09:09.959941] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:21:57.582 [2024-05-15 01:09:09.960041] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1343427 ] 00:21:57.840 EAL: No free 2048 kB hugepages reported on node 1 00:21:57.840 [2024-05-15 01:09:10.034361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.840 [2024-05-15 01:09:10.147699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.099 Running I/O for 15 seconds... 
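Both bdevperf invocations above are driven by a JSON config generated on the fly by gen_nvmf_target_json and handed over a process-substitution fd (--json /dev/fd/62 and /dev/fd/63); the config simply attaches one NVMe-oF controller, Nvme1, to the listener created earlier. The first run verifies for 1 second; the second run just launched (-t 15 -f) is the one the test will disturb by killing the target underneath it, which produces the abort and reconnect activity that follows. A stand-alone equivalent that writes the config to a file instead of a pipe; the inner bdev_nvme_attach_controller object is exactly what the trace printed, while the surrounding "subsystems"/"bdev" wrapper and the relative path to the bdevperf binary are assumptions about the generated document's layout:

cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# same queue depth, I/O size and workload as host/bdevperf.sh@27
./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w verify -t 1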
00:22:00.640 01:09:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1342785 00:22:00.640 01:09:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:22:00.640 [2024-05-15 01:09:12.933452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:34848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.640 [2024-05-15 01:09:12.933508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.640 [2024-05-15 01:09:12.933553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:34856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.640 [2024-05-15 01:09:12.933573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.640 [2024-05-15 01:09:12.933593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:34864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.640 [2024-05-15 01:09:12.933610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.640 [2024-05-15 01:09:12.933628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.640 [2024-05-15 01:09:12.933644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.640 [2024-05-15 01:09:12.933662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:34880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.640 [2024-05-15 01:09:12.933679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.640 [2024-05-15 01:09:12.933697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:34888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.640 [2024-05-15 01:09:12.933713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.640 [2024-05-15 01:09:12.933730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:34896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.640 [2024-05-15 01:09:12.933746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.640 [2024-05-15 01:09:12.933764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.640 [2024-05-15 01:09:12.933780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.640 [2024-05-15 01:09:12.933797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.640 [2024-05-15 01:09:12.933814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.640 [2024-05-15 01:09:12.933832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:34920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.640 [2024-05-15 01:09:12.933847] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.640 [2024-05-15 01:09:12.933864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.640 [2024-05-15 01:09:12.933879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.640 [2024-05-15 01:09:12.933896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:34936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.640 [2024-05-15 01:09:12.933910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.640 [2024-05-15 01:09:12.933928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:34944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.640 [2024-05-15 01:09:12.933952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.640 [2024-05-15 01:09:12.933969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:34952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.640 [2024-05-15 01:09:12.934004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.640 [2024-05-15 01:09:12.934021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:34960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.640 [2024-05-15 01:09:12.934034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.640 [2024-05-15 01:09:12.934050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.640 [2024-05-15 01:09:12.934064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.640 [2024-05-15 01:09:12.934079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:35560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.640 [2024-05-15 01:09:12.934092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.640 [2024-05-15 01:09:12.934108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:35568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.640 [2024-05-15 01:09:12.934121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.640 [2024-05-15 01:09:12.934137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:35576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.640 [2024-05-15 01:09:12.934152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.640 [2024-05-15 01:09:12.934168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:35584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.640 [2024-05-15 01:09:12.934182] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.640 [2024-05-15 01:09:12.934197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:35592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.640 [2024-05-15 01:09:12.934227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.640 [2024-05-15 01:09:12.934246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:35600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.640 [2024-05-15 01:09:12.934261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.640 [2024-05-15 01:09:12.934277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:35608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.640 [2024-05-15 01:09:12.934293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.640 [2024-05-15 01:09:12.934310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.640 [2024-05-15 01:09:12.934325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.640 [2024-05-15 01:09:12.934341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:35624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.640 [2024-05-15 01:09:12.934356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.640 [2024-05-15 01:09:12.934373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:35632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.640 [2024-05-15 01:09:12.934389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.640 [2024-05-15 01:09:12.934410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:35640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.640 [2024-05-15 01:09:12.934426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.640 [2024-05-15 01:09:12.934443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:35648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.640 [2024-05-15 01:09:12.934458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.640 [2024-05-15 01:09:12.934475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.640 [2024-05-15 01:09:12.934490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.640 [2024-05-15 01:09:12.934507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:35664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.640 [2024-05-15 01:09:12.934522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.640 [2024-05-15 01:09:12.934539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:35672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.640 [2024-05-15 01:09:12.934555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.640 [2024-05-15 01:09:12.934571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.641 [2024-05-15 01:09:12.934587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.934603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.641 [2024-05-15 01:09:12.934619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.934635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:35696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.641 [2024-05-15 01:09:12.934651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.934668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.641 [2024-05-15 01:09:12.934683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.934700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:35712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.641 [2024-05-15 01:09:12.934715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.934732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.641 [2024-05-15 01:09:12.934747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.934764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:35728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.641 [2024-05-15 01:09:12.934779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.934796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:35736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.641 [2024-05-15 01:09:12.934811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.934832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:35744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.641 [2024-05-15 01:09:12.934848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 
[2024-05-15 01:09:12.934865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.641 [2024-05-15 01:09:12.934880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.934898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.641 [2024-05-15 01:09:12.934913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.934936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.641 [2024-05-15 01:09:12.934953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.934970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.641 [2024-05-15 01:09:12.935002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.935017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:35784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.641 [2024-05-15 01:09:12.935031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.935046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:35792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.641 [2024-05-15 01:09:12.935059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.935074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:35800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.641 [2024-05-15 01:09:12.935088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.935102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:35808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.641 [2024-05-15 01:09:12.935116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.935130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:35816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.641 [2024-05-15 01:09:12.935144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.935159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:35824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.641 [2024-05-15 01:09:12.935174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.935189] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:35832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.641 [2024-05-15 01:09:12.935203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.935235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:35840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.641 [2024-05-15 01:09:12.935255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.935272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:35848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.641 [2024-05-15 01:09:12.935287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.935305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:35856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.641 [2024-05-15 01:09:12.935320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.935337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:34968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.641 [2024-05-15 01:09:12.935352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.935369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:34976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.641 [2024-05-15 01:09:12.935384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.935401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:34984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.641 [2024-05-15 01:09:12.935416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.935434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:34992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.641 [2024-05-15 01:09:12.935449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.935466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:35000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.641 [2024-05-15 01:09:12.935481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.935499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:35008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.641 [2024-05-15 01:09:12.935514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.935531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:55 nsid:1 lba:35016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.641 [2024-05-15 01:09:12.935546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.935563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:35864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.641 [2024-05-15 01:09:12.935578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.935595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:35024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.641 [2024-05-15 01:09:12.935611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.935628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.641 [2024-05-15 01:09:12.935643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.935665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.641 [2024-05-15 01:09:12.935681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.935698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:35048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.641 [2024-05-15 01:09:12.935714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.935731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:35056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.641 [2024-05-15 01:09:12.935747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.935764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:35064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.641 [2024-05-15 01:09:12.935779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.935796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:35072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.641 [2024-05-15 01:09:12.935811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.935828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:35080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.641 [2024-05-15 01:09:12.935843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.935860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:35088 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.641 [2024-05-15 01:09:12.935875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.641 [2024-05-15 01:09:12.935892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:35096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.641 [2024-05-15 01:09:12.935908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.935925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:35104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.935949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.935967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:35112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.935998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.936014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.936028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.936044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:35128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.936057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.936072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:35136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.936090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.936106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:35144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.936120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.936135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:35152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.936148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.936164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:35160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.936177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.936193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:35168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:00.642 [2024-05-15 01:09:12.936206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.936237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.936250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.936264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:35184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.936291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.936310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:35192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.936325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.936342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:35200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.936357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.936374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.936389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.936407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:35216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.936422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.936439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:35224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.936454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.936471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:35232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.936487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.936504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.936523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.936541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:35248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.936557] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.936574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:35256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.936588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.936606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:35264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.936621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.936638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:35272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.936654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.936671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:35280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.936686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.936703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:35288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.936718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.936735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:35296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.936750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.936774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:35304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.936791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.936808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:35312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.936824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.936840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:35320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.936855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.936873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:35328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.936888] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.936905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:35336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.936920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.936949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:35344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.936967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.936999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.937013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.937029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:35360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.937042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.937057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:35368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.937070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.937086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.937099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.937114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:35384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.937127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.937142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:35392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.937156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.937171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:35400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.937185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.937199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:35408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.937232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.937247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:35416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.642 [2024-05-15 01:09:12.937260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.642 [2024-05-15 01:09:12.937273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:35424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.643 [2024-05-15 01:09:12.937302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.643 [2024-05-15 01:09:12.937326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:35432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.643 [2024-05-15 01:09:12.937343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.643 [2024-05-15 01:09:12.937360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:35440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.643 [2024-05-15 01:09:12.937380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.643 [2024-05-15 01:09:12.937397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:35448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.643 [2024-05-15 01:09:12.937413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.643 [2024-05-15 01:09:12.937430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:35456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.643 [2024-05-15 01:09:12.937445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.643 [2024-05-15 01:09:12.937462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:35464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.643 [2024-05-15 01:09:12.937477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.643 [2024-05-15 01:09:12.937494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:35472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.643 [2024-05-15 01:09:12.937509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.643 [2024-05-15 01:09:12.937527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.643 [2024-05-15 01:09:12.937542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.643 [2024-05-15 01:09:12.937559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:35488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.643 [2024-05-15 01:09:12.937574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.643 [2024-05-15 01:09:12.937591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.643 [2024-05-15 01:09:12.937607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.643 [2024-05-15 01:09:12.937623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:35504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.643 [2024-05-15 01:09:12.937638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.643 [2024-05-15 01:09:12.937655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:35512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.643 [2024-05-15 01:09:12.937670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.643 [2024-05-15 01:09:12.937687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:35520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.643 [2024-05-15 01:09:12.937702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.643 [2024-05-15 01:09:12.937719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:35528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.643 [2024-05-15 01:09:12.937734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.643 [2024-05-15 01:09:12.937751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:35536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.643 [2024-05-15 01:09:12.937766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.643 [2024-05-15 01:09:12.937786] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x162b770 is same with the state(5) to be set 00:22:00.643 [2024-05-15 01:09:12.937805] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:00.643 [2024-05-15 01:09:12.937818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:00.643 [2024-05-15 01:09:12.937831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35544 len:8 PRP1 0x0 PRP2 0x0 00:22:00.643 [2024-05-15 01:09:12.937846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.643 [2024-05-15 01:09:12.937927] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x162b770 was disconnected and freed. reset controller. 
00:22:00.643 [2024-05-15 01:09:12.938027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:00.643 [2024-05-15 01:09:12.938048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.643 [2024-05-15 01:09:12.938063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:00.643 [2024-05-15 01:09:12.938076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.643 [2024-05-15 01:09:12.938089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:00.643 [2024-05-15 01:09:12.938103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.643 [2024-05-15 01:09:12.938117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:00.643 [2024-05-15 01:09:12.938130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.643 [2024-05-15 01:09:12.938143] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:00.643 [2024-05-15 01:09:12.942037] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:00.643 [2024-05-15 01:09:12.942074] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:00.643 [2024-05-15 01:09:12.942786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.643 [2024-05-15 01:09:12.943030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.643 [2024-05-15 01:09:12.943056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:00.643 [2024-05-15 01:09:12.943072] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:00.643 [2024-05-15 01:09:12.943320] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:00.643 [2024-05-15 01:09:12.943574] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:00.643 [2024-05-15 01:09:12.943598] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:00.643 [2024-05-15 01:09:12.943616] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:00.643 [2024-05-15 01:09:12.947271] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
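The block above is one full reset cycle, and the same cycle repeats for the remainder of this section: bdev_nvme disconnects the controller, the TCP queue pair tries to reconnect to 10.0.0.2 port 4420, connect() fails with errno = 111 (ECONNREFUSED, i.e. nothing is currently accepting on that port), spdk_nvme_ctrlr_reconnect_poll_async() reports that controller reinitialization failed, and the reset is scheduled again. The following is only an illustration and is not part of the test suite: a minimal standalone C program (address and port taken from the log, everything else assumed) that shows the same errno a plain connect() returns while no listener is bound.

    /* probe_4420.c - illustrative only, not SPDK code.
     * Tries one TCP connect to the address/port seen in the log and prints the
     * resulting errno; with nothing listening on 10.0.0.2:4420 this prints
     * "connect() failed, errno = 111 (Connection refused)". */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr = {0};
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd < 0) {
            perror("socket");
            return 1;
        }
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        else
            printf("connected\n");
        close(fd);
        return 0;
    }

Built with cc probe_4420.c and run while the target is down, it prints the same "errno = 111" seen above; once the target's listener is back on port 4420 it connects instead.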
00:22:00.643 [2024-05-15 01:09:12.956264] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:00.643 [2024-05-15 01:09:12.956766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.643 [2024-05-15 01:09:12.957006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.643 [2024-05-15 01:09:12.957044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:00.643 [2024-05-15 01:09:12.957062] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:00.643 [2024-05-15 01:09:12.957305] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:00.643 [2024-05-15 01:09:12.957550] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:00.643 [2024-05-15 01:09:12.957573] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:00.643 [2024-05-15 01:09:12.957589] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:00.643 [2024-05-15 01:09:12.961232] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:00.643 [2024-05-15 01:09:12.970253] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:00.643 [2024-05-15 01:09:12.970759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.643 [2024-05-15 01:09:12.970982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.643 [2024-05-15 01:09:12.971008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:00.643 [2024-05-15 01:09:12.971024] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:00.643 [2024-05-15 01:09:12.971273] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:00.643 [2024-05-15 01:09:12.971519] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:00.643 [2024-05-15 01:09:12.971542] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:00.643 [2024-05-15 01:09:12.971557] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:00.643 [2024-05-15 01:09:12.975197] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:00.643 [2024-05-15 01:09:12.984215] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:00.643 [2024-05-15 01:09:12.984705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.643 [2024-05-15 01:09:12.984967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.643 [2024-05-15 01:09:12.984993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:00.643 [2024-05-15 01:09:12.985009] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:00.643 [2024-05-15 01:09:12.985256] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:00.643 [2024-05-15 01:09:12.985502] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:00.643 [2024-05-15 01:09:12.985525] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:00.643 [2024-05-15 01:09:12.985540] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:00.643 [2024-05-15 01:09:12.989186] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:00.643 [2024-05-15 01:09:12.998197] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:00.643 [2024-05-15 01:09:12.998653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.643 [2024-05-15 01:09:12.998877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.643 [2024-05-15 01:09:12.998903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:00.643 [2024-05-15 01:09:12.998947] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:00.644 [2024-05-15 01:09:12.999216] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:00.644 [2024-05-15 01:09:12.999462] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:00.644 [2024-05-15 01:09:12.999484] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:00.644 [2024-05-15 01:09:12.999499] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:00.644 [2024-05-15 01:09:13.003142] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:00.644 [2024-05-15 01:09:13.012155] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:00.644 [2024-05-15 01:09:13.012647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.644 [2024-05-15 01:09:13.012854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.644 [2024-05-15 01:09:13.012879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:00.644 [2024-05-15 01:09:13.012895] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:00.644 [2024-05-15 01:09:13.013137] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:00.644 [2024-05-15 01:09:13.013384] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:00.644 [2024-05-15 01:09:13.013407] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:00.644 [2024-05-15 01:09:13.013422] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:00.644 [2024-05-15 01:09:13.017061] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:00.644 [2024-05-15 01:09:13.026119] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:00.644 [2024-05-15 01:09:13.026586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.644 [2024-05-15 01:09:13.026896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.644 [2024-05-15 01:09:13.026963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:00.644 [2024-05-15 01:09:13.026981] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:00.644 [2024-05-15 01:09:13.027231] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:00.644 [2024-05-15 01:09:13.027478] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:00.644 [2024-05-15 01:09:13.027501] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:00.644 [2024-05-15 01:09:13.027516] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:00.906 [2024-05-15 01:09:13.031202] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:00.906 [2024-05-15 01:09:13.040246] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:00.906 [2024-05-15 01:09:13.040730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.906 [2024-05-15 01:09:13.040956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.906 [2024-05-15 01:09:13.040982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:00.906 [2024-05-15 01:09:13.040998] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:00.906 [2024-05-15 01:09:13.041252] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:00.906 [2024-05-15 01:09:13.041498] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:00.906 [2024-05-15 01:09:13.041521] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:00.906 [2024-05-15 01:09:13.041536] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:00.906 [2024-05-15 01:09:13.045180] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:00.906 [2024-05-15 01:09:13.054203] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:00.906 [2024-05-15 01:09:13.054635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.906 [2024-05-15 01:09:13.054851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.906 [2024-05-15 01:09:13.054877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:00.906 [2024-05-15 01:09:13.054894] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:00.906 [2024-05-15 01:09:13.055156] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:00.906 [2024-05-15 01:09:13.055405] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:00.906 [2024-05-15 01:09:13.055430] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:00.906 [2024-05-15 01:09:13.055446] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:00.906 [2024-05-15 01:09:13.059093] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:00.906 [2024-05-15 01:09:13.068111] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:00.906 [2024-05-15 01:09:13.068589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.906 [2024-05-15 01:09:13.068965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.906 [2024-05-15 01:09:13.068994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:00.906 [2024-05-15 01:09:13.069011] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:00.906 [2024-05-15 01:09:13.069252] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:00.906 [2024-05-15 01:09:13.069497] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:00.906 [2024-05-15 01:09:13.069520] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:00.906 [2024-05-15 01:09:13.069535] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:00.906 [2024-05-15 01:09:13.073173] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:00.906 [2024-05-15 01:09:13.082187] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:00.906 [2024-05-15 01:09:13.082673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.906 [2024-05-15 01:09:13.082969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.906 [2024-05-15 01:09:13.082999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:00.906 [2024-05-15 01:09:13.083016] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:00.906 [2024-05-15 01:09:13.083257] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:00.906 [2024-05-15 01:09:13.083508] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:00.906 [2024-05-15 01:09:13.083531] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:00.906 [2024-05-15 01:09:13.083546] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:00.906 [2024-05-15 01:09:13.087192] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:00.906 [2024-05-15 01:09:13.096211] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:00.906 [2024-05-15 01:09:13.096671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.906 [2024-05-15 01:09:13.096852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.906 [2024-05-15 01:09:13.096877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:00.906 [2024-05-15 01:09:13.096892] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:00.906 [2024-05-15 01:09:13.097148] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:00.906 [2024-05-15 01:09:13.097395] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:00.906 [2024-05-15 01:09:13.097418] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:00.906 [2024-05-15 01:09:13.097433] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:00.906 [2024-05-15 01:09:13.101077] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:00.906 [2024-05-15 01:09:13.110326] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:00.906 [2024-05-15 01:09:13.110780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.906 [2024-05-15 01:09:13.111124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.906 [2024-05-15 01:09:13.111156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:00.906 [2024-05-15 01:09:13.111173] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:00.906 [2024-05-15 01:09:13.111416] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:00.906 [2024-05-15 01:09:13.111662] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:00.906 [2024-05-15 01:09:13.111685] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:00.906 [2024-05-15 01:09:13.111700] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:00.906 [2024-05-15 01:09:13.115351] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:00.906 [2024-05-15 01:09:13.124370] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:00.906 [2024-05-15 01:09:13.124822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.906 [2024-05-15 01:09:13.125075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.906 [2024-05-15 01:09:13.125105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:00.906 [2024-05-15 01:09:13.125122] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:00.906 [2024-05-15 01:09:13.125364] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:00.906 [2024-05-15 01:09:13.125608] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:00.906 [2024-05-15 01:09:13.125637] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:00.906 [2024-05-15 01:09:13.125653] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:00.906 [2024-05-15 01:09:13.129298] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:00.906 [2024-05-15 01:09:13.138314] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:00.906 [2024-05-15 01:09:13.138801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.906 [2024-05-15 01:09:13.139019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.906 [2024-05-15 01:09:13.139049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:00.906 [2024-05-15 01:09:13.139066] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:00.907 [2024-05-15 01:09:13.139307] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:00.907 [2024-05-15 01:09:13.139552] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:00.907 [2024-05-15 01:09:13.139575] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:00.907 [2024-05-15 01:09:13.139590] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:00.907 [2024-05-15 01:09:13.143233] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:00.907 [2024-05-15 01:09:13.152270] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:00.907 [2024-05-15 01:09:13.152719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.907 [2024-05-15 01:09:13.152937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.907 [2024-05-15 01:09:13.152963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:00.907 [2024-05-15 01:09:13.152978] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:00.907 [2024-05-15 01:09:13.153236] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:00.907 [2024-05-15 01:09:13.153482] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:00.907 [2024-05-15 01:09:13.153505] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:00.907 [2024-05-15 01:09:13.153520] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:00.907 [2024-05-15 01:09:13.157166] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:00.907 [2024-05-15 01:09:13.166185] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:00.907 [2024-05-15 01:09:13.166653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.907 [2024-05-15 01:09:13.166922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.907 [2024-05-15 01:09:13.166957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:00.907 [2024-05-15 01:09:13.166975] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:00.907 [2024-05-15 01:09:13.167216] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:00.907 [2024-05-15 01:09:13.167461] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:00.907 [2024-05-15 01:09:13.167484] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:00.907 [2024-05-15 01:09:13.167504] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:00.907 [2024-05-15 01:09:13.171155] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:00.907 [2024-05-15 01:09:13.180181] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:00.907 [2024-05-15 01:09:13.180630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.907 [2024-05-15 01:09:13.180947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.907 [2024-05-15 01:09:13.180976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:00.907 [2024-05-15 01:09:13.180993] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:00.907 [2024-05-15 01:09:13.181236] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:00.907 [2024-05-15 01:09:13.181481] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:00.907 [2024-05-15 01:09:13.181504] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:00.907 [2024-05-15 01:09:13.181519] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:00.907 [2024-05-15 01:09:13.185161] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:00.907 [2024-05-15 01:09:13.194303] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:00.907 [2024-05-15 01:09:13.194775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.907 [2024-05-15 01:09:13.195086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.907 [2024-05-15 01:09:13.195116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:00.907 [2024-05-15 01:09:13.195133] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:00.907 [2024-05-15 01:09:13.195376] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:00.907 [2024-05-15 01:09:13.195621] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:00.907 [2024-05-15 01:09:13.195644] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:00.907 [2024-05-15 01:09:13.195659] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:00.907 [2024-05-15 01:09:13.199368] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:00.907 [2024-05-15 01:09:13.208297] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:00.907 [2024-05-15 01:09:13.208774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.907 [2024-05-15 01:09:13.209061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.907 [2024-05-15 01:09:13.209091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:00.907 [2024-05-15 01:09:13.209108] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:00.907 [2024-05-15 01:09:13.209363] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:00.907 [2024-05-15 01:09:13.209618] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:00.907 [2024-05-15 01:09:13.209642] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:00.907 [2024-05-15 01:09:13.209657] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:00.907 [2024-05-15 01:09:13.213343] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:00.907 [2024-05-15 01:09:13.222433] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:00.907 [2024-05-15 01:09:13.223021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.907 [2024-05-15 01:09:13.223263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.907 [2024-05-15 01:09:13.223291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:00.907 [2024-05-15 01:09:13.223308] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:00.907 [2024-05-15 01:09:13.223549] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:00.907 [2024-05-15 01:09:13.223795] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:00.907 [2024-05-15 01:09:13.223818] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:00.907 [2024-05-15 01:09:13.223833] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:00.907 [2024-05-15 01:09:13.227505] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:00.907 [2024-05-15 01:09:13.236507] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:00.907 [2024-05-15 01:09:13.236957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.907 [2024-05-15 01:09:13.237196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.907 [2024-05-15 01:09:13.237224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:00.907 [2024-05-15 01:09:13.237240] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:00.907 [2024-05-15 01:09:13.237481] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:00.907 [2024-05-15 01:09:13.237727] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:00.907 [2024-05-15 01:09:13.237750] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:00.907 [2024-05-15 01:09:13.237765] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:00.907 [2024-05-15 01:09:13.241402] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:00.907 [2024-05-15 01:09:13.250416] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:00.907 [2024-05-15 01:09:13.250892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.907 [2024-05-15 01:09:13.251122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.907 [2024-05-15 01:09:13.251147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:00.907 [2024-05-15 01:09:13.251162] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:00.907 [2024-05-15 01:09:13.251418] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:00.907 [2024-05-15 01:09:13.251663] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:00.907 [2024-05-15 01:09:13.251686] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:00.907 [2024-05-15 01:09:13.251702] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:00.907 [2024-05-15 01:09:13.255343] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:00.907 [2024-05-15 01:09:13.264366] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:00.907 [2024-05-15 01:09:13.264842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.907 [2024-05-15 01:09:13.265031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.907 [2024-05-15 01:09:13.265057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:00.907 [2024-05-15 01:09:13.265073] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:00.907 [2024-05-15 01:09:13.265317] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:00.907 [2024-05-15 01:09:13.265562] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:00.907 [2024-05-15 01:09:13.265585] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:00.907 [2024-05-15 01:09:13.265600] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:00.907 [2024-05-15 01:09:13.269246] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:00.908 [2024-05-15 01:09:13.278472] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:00.908 [2024-05-15 01:09:13.279040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.908 [2024-05-15 01:09:13.279383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.908 [2024-05-15 01:09:13.279437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:00.908 [2024-05-15 01:09:13.279454] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:00.908 [2024-05-15 01:09:13.279695] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:00.908 [2024-05-15 01:09:13.279949] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:00.908 [2024-05-15 01:09:13.279973] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:00.908 [2024-05-15 01:09:13.279989] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:00.908 [2024-05-15 01:09:13.283623] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:00.908 [2024-05-15 01:09:13.292459] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:00.908 [2024-05-15 01:09:13.292910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.908 [2024-05-15 01:09:13.293135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:00.908 [2024-05-15 01:09:13.293164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:00.908 [2024-05-15 01:09:13.293181] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:00.908 [2024-05-15 01:09:13.293430] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:00.908 [2024-05-15 01:09:13.293676] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:00.908 [2024-05-15 01:09:13.293699] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:00.908 [2024-05-15 01:09:13.293715] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:00.908 [2024-05-15 01:09:13.297387] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.168 [2024-05-15 01:09:13.306460] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.168 [2024-05-15 01:09:13.306915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.168 [2024-05-15 01:09:13.307180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.168 [2024-05-15 01:09:13.307206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.168 [2024-05-15 01:09:13.307221] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.168 [2024-05-15 01:09:13.307482] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.168 [2024-05-15 01:09:13.307728] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.168 [2024-05-15 01:09:13.307751] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.168 [2024-05-15 01:09:13.307767] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.168 [2024-05-15 01:09:13.311412] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.168 [2024-05-15 01:09:13.320426] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.168 [2024-05-15 01:09:13.320912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.168 [2024-05-15 01:09:13.321107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.168 [2024-05-15 01:09:13.321134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.168 [2024-05-15 01:09:13.321151] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.168 [2024-05-15 01:09:13.321392] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.168 [2024-05-15 01:09:13.321638] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.168 [2024-05-15 01:09:13.321660] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.168 [2024-05-15 01:09:13.321676] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.168 [2024-05-15 01:09:13.325319] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.168 [2024-05-15 01:09:13.334334] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.168 [2024-05-15 01:09:13.334814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.168 [2024-05-15 01:09:13.335028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.168 [2024-05-15 01:09:13.335054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.168 [2024-05-15 01:09:13.335069] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.168 [2024-05-15 01:09:13.335323] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.168 [2024-05-15 01:09:13.335568] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.168 [2024-05-15 01:09:13.335591] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.168 [2024-05-15 01:09:13.335606] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.168 [2024-05-15 01:09:13.339248] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.168 [2024-05-15 01:09:13.348267] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.168 [2024-05-15 01:09:13.348791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.168 [2024-05-15 01:09:13.349017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.168 [2024-05-15 01:09:13.349047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.168 [2024-05-15 01:09:13.349063] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.168 [2024-05-15 01:09:13.349322] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.168 [2024-05-15 01:09:13.349568] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.168 [2024-05-15 01:09:13.349590] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.168 [2024-05-15 01:09:13.349606] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.168 [2024-05-15 01:09:13.353248] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.168 [2024-05-15 01:09:13.362293] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.168 [2024-05-15 01:09:13.362776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.168 [2024-05-15 01:09:13.363079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.168 [2024-05-15 01:09:13.363108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.168 [2024-05-15 01:09:13.363125] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.168 [2024-05-15 01:09:13.363366] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.168 [2024-05-15 01:09:13.363611] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.168 [2024-05-15 01:09:13.363634] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.168 [2024-05-15 01:09:13.363650] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.168 [2024-05-15 01:09:13.367296] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.168 [2024-05-15 01:09:13.376333] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.168 [2024-05-15 01:09:13.376818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.168 [2024-05-15 01:09:13.377015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.168 [2024-05-15 01:09:13.377046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.168 [2024-05-15 01:09:13.377063] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.168 [2024-05-15 01:09:13.377305] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.168 [2024-05-15 01:09:13.377550] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.168 [2024-05-15 01:09:13.377573] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.168 [2024-05-15 01:09:13.377588] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.168 [2024-05-15 01:09:13.381234] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.168 [2024-05-15 01:09:13.390255] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.168 [2024-05-15 01:09:13.390701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.168 [2024-05-15 01:09:13.390913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.168 [2024-05-15 01:09:13.390949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.168 [2024-05-15 01:09:13.390981] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.168 [2024-05-15 01:09:13.391222] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.168 [2024-05-15 01:09:13.391467] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.168 [2024-05-15 01:09:13.391490] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.168 [2024-05-15 01:09:13.391505] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.168 [2024-05-15 01:09:13.395143] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.168 [2024-05-15 01:09:13.404372] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.168 [2024-05-15 01:09:13.404855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.168 [2024-05-15 01:09:13.405057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.168 [2024-05-15 01:09:13.405084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.168 [2024-05-15 01:09:13.405099] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.168 [2024-05-15 01:09:13.405342] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.168 [2024-05-15 01:09:13.405596] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.168 [2024-05-15 01:09:13.405619] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.168 [2024-05-15 01:09:13.405634] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.168 [2024-05-15 01:09:13.409275] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.168 [2024-05-15 01:09:13.418301] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.168 [2024-05-15 01:09:13.418778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.168 [2024-05-15 01:09:13.419043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.168 [2024-05-15 01:09:13.419073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.169 [2024-05-15 01:09:13.419090] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.169 [2024-05-15 01:09:13.419338] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.169 [2024-05-15 01:09:13.419584] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.169 [2024-05-15 01:09:13.419607] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.169 [2024-05-15 01:09:13.419622] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.169 [2024-05-15 01:09:13.423268] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.169 [2024-05-15 01:09:13.432283] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.169 [2024-05-15 01:09:13.432755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.169 [2024-05-15 01:09:13.432990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.169 [2024-05-15 01:09:13.433019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.169 [2024-05-15 01:09:13.433036] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.169 [2024-05-15 01:09:13.433283] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.169 [2024-05-15 01:09:13.433529] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.169 [2024-05-15 01:09:13.433552] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.169 [2024-05-15 01:09:13.433567] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.169 [2024-05-15 01:09:13.437155] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.169 [2024-05-15 01:09:13.445845] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.169 [2024-05-15 01:09:13.446284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.169 [2024-05-15 01:09:13.446449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.169 [2024-05-15 01:09:13.446473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.169 [2024-05-15 01:09:13.446487] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.169 [2024-05-15 01:09:13.446715] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.169 [2024-05-15 01:09:13.446937] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.169 [2024-05-15 01:09:13.446958] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.169 [2024-05-15 01:09:13.446971] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.169 [2024-05-15 01:09:13.450157] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.169 [2024-05-15 01:09:13.459108] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.169 [2024-05-15 01:09:13.459556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.169 [2024-05-15 01:09:13.459820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.169 [2024-05-15 01:09:13.459844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.169 [2024-05-15 01:09:13.459859] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.169 [2024-05-15 01:09:13.460112] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.169 [2024-05-15 01:09:13.460333] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.169 [2024-05-15 01:09:13.460352] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.169 [2024-05-15 01:09:13.460365] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.169 [2024-05-15 01:09:13.463441] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.169 [2024-05-15 01:09:13.472424] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.169 [2024-05-15 01:09:13.472887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.169 [2024-05-15 01:09:13.473126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.169 [2024-05-15 01:09:13.473152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.169 [2024-05-15 01:09:13.473167] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.169 [2024-05-15 01:09:13.473424] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.169 [2024-05-15 01:09:13.473630] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.169 [2024-05-15 01:09:13.473649] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.169 [2024-05-15 01:09:13.473661] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.169 [2024-05-15 01:09:13.476689] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.169 [2024-05-15 01:09:13.485680] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.169 [2024-05-15 01:09:13.486144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.169 [2024-05-15 01:09:13.486358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.169 [2024-05-15 01:09:13.486384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.169 [2024-05-15 01:09:13.486399] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.169 [2024-05-15 01:09:13.486651] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.169 [2024-05-15 01:09:13.486852] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.169 [2024-05-15 01:09:13.486871] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.169 [2024-05-15 01:09:13.486884] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.169 [2024-05-15 01:09:13.489962] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.169 [2024-05-15 01:09:13.499105] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.169 [2024-05-15 01:09:13.499585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.169 [2024-05-15 01:09:13.499815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.169 [2024-05-15 01:09:13.499840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.169 [2024-05-15 01:09:13.499854] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.169 [2024-05-15 01:09:13.500097] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.169 [2024-05-15 01:09:13.500338] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.169 [2024-05-15 01:09:13.500358] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.169 [2024-05-15 01:09:13.500370] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.169 [2024-05-15 01:09:13.503395] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.169 [2024-05-15 01:09:13.512500] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.169 [2024-05-15 01:09:13.512921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.169 [2024-05-15 01:09:13.513134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.169 [2024-05-15 01:09:13.513161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.169 [2024-05-15 01:09:13.513176] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.169 [2024-05-15 01:09:13.513434] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.169 [2024-05-15 01:09:13.513635] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.169 [2024-05-15 01:09:13.513658] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.169 [2024-05-15 01:09:13.513671] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.169 [2024-05-15 01:09:13.516740] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.169 [2024-05-15 01:09:13.525811] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.169 [2024-05-15 01:09:13.526272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.169 [2024-05-15 01:09:13.526510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.169 [2024-05-15 01:09:13.526534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.169 [2024-05-15 01:09:13.526548] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.169 [2024-05-15 01:09:13.526771] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.169 [2024-05-15 01:09:13.527032] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.169 [2024-05-15 01:09:13.527054] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.169 [2024-05-15 01:09:13.527067] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.169 [2024-05-15 01:09:13.530114] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.170 [2024-05-15 01:09:13.539193] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.170 [2024-05-15 01:09:13.539662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.170 [2024-05-15 01:09:13.539834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.170 [2024-05-15 01:09:13.539861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.170 [2024-05-15 01:09:13.539878] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.170 [2024-05-15 01:09:13.540121] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.170 [2024-05-15 01:09:13.540362] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.170 [2024-05-15 01:09:13.540383] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.170 [2024-05-15 01:09:13.540397] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.170 [2024-05-15 01:09:13.543476] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.170 [2024-05-15 01:09:13.552491] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.170 [2024-05-15 01:09:13.552969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.170 [2024-05-15 01:09:13.553137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.170 [2024-05-15 01:09:13.553162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.170 [2024-05-15 01:09:13.553177] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.170 [2024-05-15 01:09:13.553432] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.170 [2024-05-15 01:09:13.553633] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.170 [2024-05-15 01:09:13.553652] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.170 [2024-05-15 01:09:13.553669] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.170 [2024-05-15 01:09:13.556803] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.430 [2024-05-15 01:09:13.566129] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.430 [2024-05-15 01:09:13.566615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.430 [2024-05-15 01:09:13.566852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.430 [2024-05-15 01:09:13.566877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.430 [2024-05-15 01:09:13.566892] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.430 [2024-05-15 01:09:13.567131] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.430 [2024-05-15 01:09:13.567355] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.430 [2024-05-15 01:09:13.567374] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.430 [2024-05-15 01:09:13.567386] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.430 [2024-05-15 01:09:13.570457] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.430 [2024-05-15 01:09:13.579417] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.430 [2024-05-15 01:09:13.579880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.430 [2024-05-15 01:09:13.580082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.430 [2024-05-15 01:09:13.580106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.430 [2024-05-15 01:09:13.580121] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.430 [2024-05-15 01:09:13.580361] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.430 [2024-05-15 01:09:13.580562] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.430 [2024-05-15 01:09:13.580581] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.430 [2024-05-15 01:09:13.580593] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.430 [2024-05-15 01:09:13.583581] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.430 [2024-05-15 01:09:13.592736] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.430 [2024-05-15 01:09:13.593211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.430 [2024-05-15 01:09:13.593492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.430 [2024-05-15 01:09:13.593517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.430 [2024-05-15 01:09:13.593532] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.430 [2024-05-15 01:09:13.593766] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.430 [2024-05-15 01:09:13.594009] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.430 [2024-05-15 01:09:13.594031] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.430 [2024-05-15 01:09:13.594044] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.430 [2024-05-15 01:09:13.597132] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.430 [2024-05-15 01:09:13.606055] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.430 [2024-05-15 01:09:13.606541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.430 [2024-05-15 01:09:13.606731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.430 [2024-05-15 01:09:13.606755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.430 [2024-05-15 01:09:13.606769] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.430 [2024-05-15 01:09:13.607001] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.430 [2024-05-15 01:09:13.607225] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.430 [2024-05-15 01:09:13.607245] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.430 [2024-05-15 01:09:13.607257] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.430 [2024-05-15 01:09:13.610355] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.430 [2024-05-15 01:09:13.619334] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.430 [2024-05-15 01:09:13.619761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.430 [2024-05-15 01:09:13.620011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.430 [2024-05-15 01:09:13.620037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.430 [2024-05-15 01:09:13.620052] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.430 [2024-05-15 01:09:13.620290] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.430 [2024-05-15 01:09:13.620491] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.430 [2024-05-15 01:09:13.620510] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.431 [2024-05-15 01:09:13.620523] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.431 [2024-05-15 01:09:13.623587] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.431 [2024-05-15 01:09:13.632724] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.431 [2024-05-15 01:09:13.633132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.431 [2024-05-15 01:09:13.633346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.431 [2024-05-15 01:09:13.633371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.431 [2024-05-15 01:09:13.633386] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.431 [2024-05-15 01:09:13.633623] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.431 [2024-05-15 01:09:13.633845] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.431 [2024-05-15 01:09:13.633865] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.431 [2024-05-15 01:09:13.633878] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.431 [2024-05-15 01:09:13.636987] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.431 [2024-05-15 01:09:13.646029] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.431 [2024-05-15 01:09:13.646605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.431 [2024-05-15 01:09:13.646908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.431 [2024-05-15 01:09:13.646954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.431 [2024-05-15 01:09:13.646972] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.431 [2024-05-15 01:09:13.647217] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.431 [2024-05-15 01:09:13.647433] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.431 [2024-05-15 01:09:13.647453] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.431 [2024-05-15 01:09:13.647465] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.431 [2024-05-15 01:09:13.650562] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.431 [2024-05-15 01:09:13.659353] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.431 [2024-05-15 01:09:13.659818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.431 [2024-05-15 01:09:13.660048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.431 [2024-05-15 01:09:13.660074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.431 [2024-05-15 01:09:13.660104] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.431 [2024-05-15 01:09:13.660341] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.431 [2024-05-15 01:09:13.660542] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.431 [2024-05-15 01:09:13.660561] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.431 [2024-05-15 01:09:13.660573] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.431 [2024-05-15 01:09:13.663667] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.431 [2024-05-15 01:09:13.672742] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.431 [2024-05-15 01:09:13.673198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.431 [2024-05-15 01:09:13.673357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.431 [2024-05-15 01:09:13.673383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.431 [2024-05-15 01:09:13.673398] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.431 [2024-05-15 01:09:13.673653] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.431 [2024-05-15 01:09:13.673854] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.431 [2024-05-15 01:09:13.673873] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.431 [2024-05-15 01:09:13.673885] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.431 [2024-05-15 01:09:13.676936] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.431 [2024-05-15 01:09:13.686109] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.431 [2024-05-15 01:09:13.686599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.431 [2024-05-15 01:09:13.686811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.431 [2024-05-15 01:09:13.686836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.431 [2024-05-15 01:09:13.686851] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.431 [2024-05-15 01:09:13.687133] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.431 [2024-05-15 01:09:13.687355] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.431 [2024-05-15 01:09:13.687375] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.431 [2024-05-15 01:09:13.687387] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.431 [2024-05-15 01:09:13.690568] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.431 [2024-05-15 01:09:13.699625] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.431 [2024-05-15 01:09:13.700075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.431 [2024-05-15 01:09:13.700320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.431 [2024-05-15 01:09:13.700344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.431 [2024-05-15 01:09:13.700358] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.431 [2024-05-15 01:09:13.700582] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.431 [2024-05-15 01:09:13.700799] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.431 [2024-05-15 01:09:13.700821] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.431 [2024-05-15 01:09:13.700833] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.431 [2024-05-15 01:09:13.703967] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.431 [2024-05-15 01:09:13.712953] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.431 [2024-05-15 01:09:13.713476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.431 [2024-05-15 01:09:13.713705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.431 [2024-05-15 01:09:13.713730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.431 [2024-05-15 01:09:13.713745] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.431 [2024-05-15 01:09:13.713997] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.431 [2024-05-15 01:09:13.714227] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.431 [2024-05-15 01:09:13.714247] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.431 [2024-05-15 01:09:13.714260] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.431 [2024-05-15 01:09:13.717340] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.431 [2024-05-15 01:09:13.726325] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.431 [2024-05-15 01:09:13.726745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.431 [2024-05-15 01:09:13.727014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.431 [2024-05-15 01:09:13.727045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.431 [2024-05-15 01:09:13.727061] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.431 [2024-05-15 01:09:13.727291] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.431 [2024-05-15 01:09:13.727508] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.431 [2024-05-15 01:09:13.727527] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.431 [2024-05-15 01:09:13.727539] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.431 [2024-05-15 01:09:13.730566] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.431 [2024-05-15 01:09:13.739807] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.431 [2024-05-15 01:09:13.740306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.431 [2024-05-15 01:09:13.740653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.431 [2024-05-15 01:09:13.740677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.431 [2024-05-15 01:09:13.740692] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.431 [2024-05-15 01:09:13.740914] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.431 [2024-05-15 01:09:13.741165] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.431 [2024-05-15 01:09:13.741186] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.431 [2024-05-15 01:09:13.741199] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.431 [2024-05-15 01:09:13.744270] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.431 [2024-05-15 01:09:13.753189] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.431 [2024-05-15 01:09:13.753674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.431 [2024-05-15 01:09:13.753897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.431 [2024-05-15 01:09:13.753946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.431 [2024-05-15 01:09:13.753963] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.432 [2024-05-15 01:09:13.754193] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.432 [2024-05-15 01:09:13.754428] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.432 [2024-05-15 01:09:13.754448] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.432 [2024-05-15 01:09:13.754460] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.432 [2024-05-15 01:09:13.757486] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.432 [2024-05-15 01:09:13.766550] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.432 [2024-05-15 01:09:13.767003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.432 [2024-05-15 01:09:13.767204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.432 [2024-05-15 01:09:13.767230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.432 [2024-05-15 01:09:13.767250] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.432 [2024-05-15 01:09:13.767499] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.432 [2024-05-15 01:09:13.767715] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.432 [2024-05-15 01:09:13.767734] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.432 [2024-05-15 01:09:13.767746] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.432 [2024-05-15 01:09:13.770788] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.432 [2024-05-15 01:09:13.779840] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.432 [2024-05-15 01:09:13.780333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.432 [2024-05-15 01:09:13.780633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.432 [2024-05-15 01:09:13.780658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.432 [2024-05-15 01:09:13.780673] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.432 [2024-05-15 01:09:13.780928] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.432 [2024-05-15 01:09:13.781164] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.432 [2024-05-15 01:09:13.781185] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.432 [2024-05-15 01:09:13.781198] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.432 [2024-05-15 01:09:13.784226] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.432 [2024-05-15 01:09:13.793156] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.432 [2024-05-15 01:09:13.793640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.432 [2024-05-15 01:09:13.793848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.432 [2024-05-15 01:09:13.793875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.432 [2024-05-15 01:09:13.793890] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.432 [2024-05-15 01:09:13.794158] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.432 [2024-05-15 01:09:13.794398] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.432 [2024-05-15 01:09:13.794418] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.432 [2024-05-15 01:09:13.794430] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.432 [2024-05-15 01:09:13.797458] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.432 [2024-05-15 01:09:13.806402] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.432 [2024-05-15 01:09:13.806879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.432 [2024-05-15 01:09:13.807100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.432 [2024-05-15 01:09:13.807125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.432 [2024-05-15 01:09:13.807140] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.432 [2024-05-15 01:09:13.807370] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.432 [2024-05-15 01:09:13.807571] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.432 [2024-05-15 01:09:13.807590] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.432 [2024-05-15 01:09:13.807602] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.432 [2024-05-15 01:09:13.810632] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.432 [2024-05-15 01:09:13.819867] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.432 [2024-05-15 01:09:13.820309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.432 [2024-05-15 01:09:13.820494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.432 [2024-05-15 01:09:13.820519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.432 [2024-05-15 01:09:13.820535] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.432 [2024-05-15 01:09:13.820774] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.432 [2024-05-15 01:09:13.821008] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.432 [2024-05-15 01:09:13.821029] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.432 [2024-05-15 01:09:13.821042] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.692 [2024-05-15 01:09:13.824325] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.692 [2024-05-15 01:09:13.833131] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.692 [2024-05-15 01:09:13.833604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.692 [2024-05-15 01:09:13.833829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.692 [2024-05-15 01:09:13.833854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.692 [2024-05-15 01:09:13.833869] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.692 [2024-05-15 01:09:13.834096] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.692 [2024-05-15 01:09:13.834340] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.692 [2024-05-15 01:09:13.834360] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.692 [2024-05-15 01:09:13.834372] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.692 [2024-05-15 01:09:13.837443] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.692 [2024-05-15 01:09:13.846424] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.692 [2024-05-15 01:09:13.846890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.692 [2024-05-15 01:09:13.847128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.692 [2024-05-15 01:09:13.847154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.692 [2024-05-15 01:09:13.847169] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.692 [2024-05-15 01:09:13.847426] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.692 [2024-05-15 01:09:13.847633] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.692 [2024-05-15 01:09:13.847652] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.692 [2024-05-15 01:09:13.847664] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.692 [2024-05-15 01:09:13.850749] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.692 [2024-05-15 01:09:13.859661] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.692 [2024-05-15 01:09:13.860117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.692 [2024-05-15 01:09:13.860315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.692 [2024-05-15 01:09:13.860341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.692 [2024-05-15 01:09:13.860355] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.692 [2024-05-15 01:09:13.860593] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.692 [2024-05-15 01:09:13.860794] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.692 [2024-05-15 01:09:13.860813] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.692 [2024-05-15 01:09:13.860825] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.692 [2024-05-15 01:09:13.863896] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.692 [2024-05-15 01:09:13.873116] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.692 [2024-05-15 01:09:13.873597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.692 [2024-05-15 01:09:13.873923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.692 [2024-05-15 01:09:13.873969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.692 [2024-05-15 01:09:13.873985] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.692 [2024-05-15 01:09:13.874216] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.692 [2024-05-15 01:09:13.874452] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.692 [2024-05-15 01:09:13.874472] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.692 [2024-05-15 01:09:13.874484] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.692 [2024-05-15 01:09:13.877568] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.692 [2024-05-15 01:09:13.886526] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.692 [2024-05-15 01:09:13.886986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.692 [2024-05-15 01:09:13.887490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.692 [2024-05-15 01:09:13.887542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.692 [2024-05-15 01:09:13.887559] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.693 [2024-05-15 01:09:13.887795] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.693 [2024-05-15 01:09:13.888042] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.693 [2024-05-15 01:09:13.888069] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.693 [2024-05-15 01:09:13.888083] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.693 [2024-05-15 01:09:13.891142] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.693 [2024-05-15 01:09:13.899833] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.693 [2024-05-15 01:09:13.900279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.693 [2024-05-15 01:09:13.900491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.693 [2024-05-15 01:09:13.900516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.693 [2024-05-15 01:09:13.900531] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.693 [2024-05-15 01:09:13.900762] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.693 [2024-05-15 01:09:13.900994] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.693 [2024-05-15 01:09:13.901015] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.693 [2024-05-15 01:09:13.901028] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.693 [2024-05-15 01:09:13.904079] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.693 [2024-05-15 01:09:13.913383] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.693 [2024-05-15 01:09:13.913817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.693 [2024-05-15 01:09:13.914083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.693 [2024-05-15 01:09:13.914111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.693 [2024-05-15 01:09:13.914126] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.693 [2024-05-15 01:09:13.914367] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.693 [2024-05-15 01:09:13.914569] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.693 [2024-05-15 01:09:13.914588] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.693 [2024-05-15 01:09:13.914601] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.693 [2024-05-15 01:09:13.917636] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.693 [2024-05-15 01:09:13.926806] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.693 [2024-05-15 01:09:13.927300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.693 [2024-05-15 01:09:13.927483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.693 [2024-05-15 01:09:13.927508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.693 [2024-05-15 01:09:13.927523] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.693 [2024-05-15 01:09:13.927767] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.693 [2024-05-15 01:09:13.928011] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.693 [2024-05-15 01:09:13.928031] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.693 [2024-05-15 01:09:13.928049] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.693 [2024-05-15 01:09:13.931122] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.693 [2024-05-15 01:09:13.940157] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.693 [2024-05-15 01:09:13.940568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.693 [2024-05-15 01:09:13.940796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.693 [2024-05-15 01:09:13.940821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.693 [2024-05-15 01:09:13.940836] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.693 [2024-05-15 01:09:13.941114] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.693 [2024-05-15 01:09:13.941382] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.693 [2024-05-15 01:09:13.941402] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.693 [2024-05-15 01:09:13.941414] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.693 [2024-05-15 01:09:13.944724] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.693 [2024-05-15 01:09:13.953435] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.693 [2024-05-15 01:09:13.953838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.693 [2024-05-15 01:09:13.954037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.693 [2024-05-15 01:09:13.954062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.693 [2024-05-15 01:09:13.954076] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.693 [2024-05-15 01:09:13.954294] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.693 [2024-05-15 01:09:13.954496] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.693 [2024-05-15 01:09:13.954515] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.693 [2024-05-15 01:09:13.954527] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.693 [2024-05-15 01:09:13.957600] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.693 [2024-05-15 01:09:13.966785] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.693 [2024-05-15 01:09:13.967249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.693 [2024-05-15 01:09:13.967473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.693 [2024-05-15 01:09:13.967498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.693 [2024-05-15 01:09:13.967514] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.693 [2024-05-15 01:09:13.967758] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.693 [2024-05-15 01:09:13.968015] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.693 [2024-05-15 01:09:13.968037] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.693 [2024-05-15 01:09:13.968051] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.693 [2024-05-15 01:09:13.971249] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.693 [2024-05-15 01:09:13.980182] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.693 [2024-05-15 01:09:13.980588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.693 [2024-05-15 01:09:13.980785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.693 [2024-05-15 01:09:13.980810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.693 [2024-05-15 01:09:13.980826] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.693 [2024-05-15 01:09:13.981052] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.693 [2024-05-15 01:09:13.981303] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.693 [2024-05-15 01:09:13.981322] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.693 [2024-05-15 01:09:13.981334] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.693 [2024-05-15 01:09:13.984463] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.693 [2024-05-15 01:09:13.993544] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.693 [2024-05-15 01:09:13.994009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.693 [2024-05-15 01:09:13.994211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.693 [2024-05-15 01:09:13.994236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.693 [2024-05-15 01:09:13.994251] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.693 [2024-05-15 01:09:13.994504] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.693 [2024-05-15 01:09:13.994706] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.693 [2024-05-15 01:09:13.994724] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.693 [2024-05-15 01:09:13.994737] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.693 [2024-05-15 01:09:13.997808] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.693 [2024-05-15 01:09:14.006780] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.693 [2024-05-15 01:09:14.007233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.693 [2024-05-15 01:09:14.007501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.693 [2024-05-15 01:09:14.007526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.693 [2024-05-15 01:09:14.007540] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.693 [2024-05-15 01:09:14.007738] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.693 [2024-05-15 01:09:14.007965] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.693 [2024-05-15 01:09:14.007994] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.693 [2024-05-15 01:09:14.008006] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.693 [2024-05-15 01:09:14.011056] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.693 [2024-05-15 01:09:14.020257] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.693 [2024-05-15 01:09:14.020641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.693 [2024-05-15 01:09:14.020802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.693 [2024-05-15 01:09:14.020825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.693 [2024-05-15 01:09:14.020840] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.693 [2024-05-15 01:09:14.021087] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.693 [2024-05-15 01:09:14.021329] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.693 [2024-05-15 01:09:14.021350] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.693 [2024-05-15 01:09:14.021362] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.693 [2024-05-15 01:09:14.024450] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.693 [2024-05-15 01:09:14.033538] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.693 [2024-05-15 01:09:14.033970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.693 [2024-05-15 01:09:14.034206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.693 [2024-05-15 01:09:14.034232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.693 [2024-05-15 01:09:14.034247] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.693 [2024-05-15 01:09:14.034488] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.693 [2024-05-15 01:09:14.034690] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.693 [2024-05-15 01:09:14.034709] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.693 [2024-05-15 01:09:14.034722] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.693 [2024-05-15 01:09:14.037778] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.693 [2024-05-15 01:09:14.046786] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.693 [2024-05-15 01:09:14.047237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.693 [2024-05-15 01:09:14.047478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.693 [2024-05-15 01:09:14.047503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.693 [2024-05-15 01:09:14.047518] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.693 [2024-05-15 01:09:14.047758] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.693 [2024-05-15 01:09:14.047983] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.693 [2024-05-15 01:09:14.048004] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.693 [2024-05-15 01:09:14.048017] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.693 [2024-05-15 01:09:14.051070] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.693 [2024-05-15 01:09:14.060260] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.693 [2024-05-15 01:09:14.060742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.693 [2024-05-15 01:09:14.060981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.693 [2024-05-15 01:09:14.061008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.693 [2024-05-15 01:09:14.061024] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.693 [2024-05-15 01:09:14.061260] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.693 [2024-05-15 01:09:14.061480] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.693 [2024-05-15 01:09:14.061499] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.693 [2024-05-15 01:09:14.061512] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.693 [2024-05-15 01:09:14.064539] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.693 [2024-05-15 01:09:14.073692] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.693 [2024-05-15 01:09:14.074103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.693 [2024-05-15 01:09:14.074322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.693 [2024-05-15 01:09:14.074346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.693 [2024-05-15 01:09:14.074361] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.693 [2024-05-15 01:09:14.074584] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.694 [2024-05-15 01:09:14.074801] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.694 [2024-05-15 01:09:14.074820] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.694 [2024-05-15 01:09:14.074832] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.694 [2024-05-15 01:09:14.077885] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.961 [2024-05-15 01:09:14.087044] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.961 [2024-05-15 01:09:14.087529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.961 [2024-05-15 01:09:14.087880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.961 [2024-05-15 01:09:14.087905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.961 [2024-05-15 01:09:14.087920] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.961 [2024-05-15 01:09:14.088162] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.961 [2024-05-15 01:09:14.088414] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.961 [2024-05-15 01:09:14.088435] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.961 [2024-05-15 01:09:14.088449] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.961 [2024-05-15 01:09:14.091584] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.961 [2024-05-15 01:09:14.100367] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.961 [2024-05-15 01:09:14.100745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.961 [2024-05-15 01:09:14.100923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.961 [2024-05-15 01:09:14.100963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.961 [2024-05-15 01:09:14.100994] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.962 [2024-05-15 01:09:14.101238] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.962 [2024-05-15 01:09:14.101456] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.962 [2024-05-15 01:09:14.101475] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.962 [2024-05-15 01:09:14.101487] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.962 [2024-05-15 01:09:14.104517] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.962 [2024-05-15 01:09:14.113631] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.962 [2024-05-15 01:09:14.114165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.962 [2024-05-15 01:09:14.114344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.962 [2024-05-15 01:09:14.114368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.962 [2024-05-15 01:09:14.114383] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.962 [2024-05-15 01:09:14.114622] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.962 [2024-05-15 01:09:14.114825] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.962 [2024-05-15 01:09:14.114844] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.962 [2024-05-15 01:09:14.114856] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.962 [2024-05-15 01:09:14.117953] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.962 [2024-05-15 01:09:14.126995] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.962 [2024-05-15 01:09:14.127641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.962 [2024-05-15 01:09:14.127972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.962 [2024-05-15 01:09:14.128000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.962 [2024-05-15 01:09:14.128015] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.962 [2024-05-15 01:09:14.128250] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.962 [2024-05-15 01:09:14.128454] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.962 [2024-05-15 01:09:14.128473] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.962 [2024-05-15 01:09:14.128485] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.962 [2024-05-15 01:09:14.131555] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.962 [2024-05-15 01:09:14.140407] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.962 [2024-05-15 01:09:14.140815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.962 [2024-05-15 01:09:14.141069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.962 [2024-05-15 01:09:14.141096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.962 [2024-05-15 01:09:14.141118] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.962 [2024-05-15 01:09:14.141357] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.962 [2024-05-15 01:09:14.141559] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.962 [2024-05-15 01:09:14.141578] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.962 [2024-05-15 01:09:14.141590] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.962 [2024-05-15 01:09:14.144656] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.962 [2024-05-15 01:09:14.153652] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.962 [2024-05-15 01:09:14.154122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.962 [2024-05-15 01:09:14.154338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.962 [2024-05-15 01:09:14.154364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.962 [2024-05-15 01:09:14.154379] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.962 [2024-05-15 01:09:14.154636] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.962 [2024-05-15 01:09:14.154838] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.962 [2024-05-15 01:09:14.154857] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.962 [2024-05-15 01:09:14.154869] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.962 [2024-05-15 01:09:14.157899] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.962 [2024-05-15 01:09:14.166975] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.962 [2024-05-15 01:09:14.167376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.962 [2024-05-15 01:09:14.167600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.962 [2024-05-15 01:09:14.167624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.962 [2024-05-15 01:09:14.167639] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.962 [2024-05-15 01:09:14.167875] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.962 [2024-05-15 01:09:14.168131] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.962 [2024-05-15 01:09:14.168153] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.962 [2024-05-15 01:09:14.168166] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.962 [2024-05-15 01:09:14.171235] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.962 [2024-05-15 01:09:14.180317] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.962 [2024-05-15 01:09:14.180765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.962 [2024-05-15 01:09:14.180979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.962 [2024-05-15 01:09:14.181006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.962 [2024-05-15 01:09:14.181022] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.962 [2024-05-15 01:09:14.181284] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.962 [2024-05-15 01:09:14.181530] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.962 [2024-05-15 01:09:14.181553] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.962 [2024-05-15 01:09:14.181568] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.962 [2024-05-15 01:09:14.185149] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.962 [2024-05-15 01:09:14.194275] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.962 [2024-05-15 01:09:14.194820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.962 [2024-05-15 01:09:14.195041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.962 [2024-05-15 01:09:14.195068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.962 [2024-05-15 01:09:14.195083] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.962 [2024-05-15 01:09:14.195328] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.962 [2024-05-15 01:09:14.195574] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.962 [2024-05-15 01:09:14.195597] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.962 [2024-05-15 01:09:14.195612] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.962 [2024-05-15 01:09:14.199306] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.962 [2024-05-15 01:09:14.208385] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.962 [2024-05-15 01:09:14.208842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.962 [2024-05-15 01:09:14.209110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.962 [2024-05-15 01:09:14.209137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.962 [2024-05-15 01:09:14.209152] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.962 [2024-05-15 01:09:14.209410] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.962 [2024-05-15 01:09:14.209662] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.962 [2024-05-15 01:09:14.209686] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.962 [2024-05-15 01:09:14.209701] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.962 [2024-05-15 01:09:14.213372] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.962 [2024-05-15 01:09:14.222389] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.962 [2024-05-15 01:09:14.222849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.962 [2024-05-15 01:09:14.223056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.962 [2024-05-15 01:09:14.223083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.962 [2024-05-15 01:09:14.223098] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.962 [2024-05-15 01:09:14.223354] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.962 [2024-05-15 01:09:14.223609] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.962 [2024-05-15 01:09:14.223633] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.962 [2024-05-15 01:09:14.223648] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.962 [2024-05-15 01:09:14.227293] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.962 [2024-05-15 01:09:14.236308] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.962 [2024-05-15 01:09:14.236734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.963 [2024-05-15 01:09:14.236983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.963 [2024-05-15 01:09:14.237009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.963 [2024-05-15 01:09:14.237024] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.963 [2024-05-15 01:09:14.237287] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.963 [2024-05-15 01:09:14.237534] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.963 [2024-05-15 01:09:14.237557] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.963 [2024-05-15 01:09:14.237572] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.963 [2024-05-15 01:09:14.241219] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.963 [2024-05-15 01:09:14.250237] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.963 [2024-05-15 01:09:14.250728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.963 [2024-05-15 01:09:14.250925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.963 [2024-05-15 01:09:14.250974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.963 [2024-05-15 01:09:14.250990] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.963 [2024-05-15 01:09:14.251236] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.963 [2024-05-15 01:09:14.251482] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.963 [2024-05-15 01:09:14.251505] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.963 [2024-05-15 01:09:14.251520] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.963 [2024-05-15 01:09:14.255161] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.963 [2024-05-15 01:09:14.264175] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.963 [2024-05-15 01:09:14.264662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.963 [2024-05-15 01:09:14.264902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.963 [2024-05-15 01:09:14.264941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.963 [2024-05-15 01:09:14.264966] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.963 [2024-05-15 01:09:14.265208] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.963 [2024-05-15 01:09:14.265453] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.963 [2024-05-15 01:09:14.265482] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.963 [2024-05-15 01:09:14.265498] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.963 [2024-05-15 01:09:14.269144] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.963 [2024-05-15 01:09:14.278165] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.963 [2024-05-15 01:09:14.278653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.963 [2024-05-15 01:09:14.278846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.963 [2024-05-15 01:09:14.278886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.963 [2024-05-15 01:09:14.278900] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.963 [2024-05-15 01:09:14.279170] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.963 [2024-05-15 01:09:14.279417] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.963 [2024-05-15 01:09:14.279440] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.963 [2024-05-15 01:09:14.279454] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.963 [2024-05-15 01:09:14.283095] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.963 [2024-05-15 01:09:14.292113] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.963 [2024-05-15 01:09:14.292593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.963 [2024-05-15 01:09:14.292773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.963 [2024-05-15 01:09:14.292801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.963 [2024-05-15 01:09:14.292817] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.963 [2024-05-15 01:09:14.293071] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.963 [2024-05-15 01:09:14.293317] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.963 [2024-05-15 01:09:14.293340] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.963 [2024-05-15 01:09:14.293355] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.963 [2024-05-15 01:09:14.296996] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.963 [2024-05-15 01:09:14.306246] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.963 [2024-05-15 01:09:14.306909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.963 [2024-05-15 01:09:14.307181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.963 [2024-05-15 01:09:14.307212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.963 [2024-05-15 01:09:14.307230] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.963 [2024-05-15 01:09:14.307471] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.963 [2024-05-15 01:09:14.307716] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.963 [2024-05-15 01:09:14.307739] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.963 [2024-05-15 01:09:14.307760] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.963 [2024-05-15 01:09:14.311403] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.963 [2024-05-15 01:09:14.320207] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.963 [2024-05-15 01:09:14.320739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.963 [2024-05-15 01:09:14.321029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.963 [2024-05-15 01:09:14.321079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.963 [2024-05-15 01:09:14.321096] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.963 [2024-05-15 01:09:14.321337] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.963 [2024-05-15 01:09:14.321583] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.963 [2024-05-15 01:09:14.321606] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.963 [2024-05-15 01:09:14.321620] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.963 [2024-05-15 01:09:14.325265] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.963 [2024-05-15 01:09:14.334283] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.963 [2024-05-15 01:09:14.334755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.963 [2024-05-15 01:09:14.334967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.963 [2024-05-15 01:09:14.334996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:01.963 [2024-05-15 01:09:14.335013] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:01.963 [2024-05-15 01:09:14.335255] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:01.963 [2024-05-15 01:09:14.335499] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.963 [2024-05-15 01:09:14.335522] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.963 [2024-05-15 01:09:14.335537] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.963 [2024-05-15 01:09:14.339180] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.240 [2024-05-15 01:09:14.349090] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.240 [2024-05-15 01:09:14.349655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.240 [2024-05-15 01:09:14.349921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.240 [2024-05-15 01:09:14.349976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.240 [2024-05-15 01:09:14.350009] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.240 [2024-05-15 01:09:14.350308] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.240 [2024-05-15 01:09:14.350562] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.240 [2024-05-15 01:09:14.350587] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.240 [2024-05-15 01:09:14.350603] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.240 [2024-05-15 01:09:14.354436] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.240 [2024-05-15 01:09:14.363358] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.240 [2024-05-15 01:09:14.363859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.240 [2024-05-15 01:09:14.364057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.241 [2024-05-15 01:09:14.364084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.241 [2024-05-15 01:09:14.364100] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.241 [2024-05-15 01:09:14.364360] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.241 [2024-05-15 01:09:14.364628] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.241 [2024-05-15 01:09:14.364655] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.241 [2024-05-15 01:09:14.364671] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.241 [2024-05-15 01:09:14.368535] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.241 [2024-05-15 01:09:14.377648] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.241 [2024-05-15 01:09:14.378157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.241 [2024-05-15 01:09:14.378524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.241 [2024-05-15 01:09:14.378576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.241 [2024-05-15 01:09:14.378595] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.241 [2024-05-15 01:09:14.378851] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.241 [2024-05-15 01:09:14.379209] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.241 [2024-05-15 01:09:14.379235] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.241 [2024-05-15 01:09:14.379250] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.241 [2024-05-15 01:09:14.383130] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.241 [2024-05-15 01:09:14.391817] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.241 [2024-05-15 01:09:14.392289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.241 [2024-05-15 01:09:14.392491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.241 [2024-05-15 01:09:14.392519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.241 [2024-05-15 01:09:14.392536] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.241 [2024-05-15 01:09:14.392778] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.241 [2024-05-15 01:09:14.393034] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.241 [2024-05-15 01:09:14.393059] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.241 [2024-05-15 01:09:14.393074] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.241 [2024-05-15 01:09:14.396708] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.241 [2024-05-15 01:09:14.405734] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.241 [2024-05-15 01:09:14.406192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.241 [2024-05-15 01:09:14.406400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.241 [2024-05-15 01:09:14.406430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.241 [2024-05-15 01:09:14.406447] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.241 [2024-05-15 01:09:14.406688] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.241 [2024-05-15 01:09:14.406947] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.241 [2024-05-15 01:09:14.406971] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.241 [2024-05-15 01:09:14.406986] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.241 [2024-05-15 01:09:14.410624] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.241 [2024-05-15 01:09:14.419640] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.241 [2024-05-15 01:09:14.420123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.241 [2024-05-15 01:09:14.420354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.241 [2024-05-15 01:09:14.420406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.241 [2024-05-15 01:09:14.420423] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.241 [2024-05-15 01:09:14.420665] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.241 [2024-05-15 01:09:14.420910] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.241 [2024-05-15 01:09:14.420941] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.241 [2024-05-15 01:09:14.420958] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.241 [2024-05-15 01:09:14.424594] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.241 [2024-05-15 01:09:14.433604] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.241 [2024-05-15 01:09:14.434089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.241 [2024-05-15 01:09:14.434304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.241 [2024-05-15 01:09:14.434330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.241 [2024-05-15 01:09:14.434345] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.241 [2024-05-15 01:09:14.434589] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.241 [2024-05-15 01:09:14.434835] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.241 [2024-05-15 01:09:14.434857] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.241 [2024-05-15 01:09:14.434872] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.241 [2024-05-15 01:09:14.438515] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.241 [2024-05-15 01:09:14.447582] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.241 [2024-05-15 01:09:14.448040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.241 [2024-05-15 01:09:14.448220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.241 [2024-05-15 01:09:14.448250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.241 [2024-05-15 01:09:14.448267] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.241 [2024-05-15 01:09:14.448509] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.241 [2024-05-15 01:09:14.448754] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.241 [2024-05-15 01:09:14.448777] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.241 [2024-05-15 01:09:14.448792] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.241 [2024-05-15 01:09:14.452476] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.241 [2024-05-15 01:09:14.461578] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.241 [2024-05-15 01:09:14.462057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.241 [2024-05-15 01:09:14.462263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.241 [2024-05-15 01:09:14.462291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.241 [2024-05-15 01:09:14.462308] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.241 [2024-05-15 01:09:14.462550] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.241 [2024-05-15 01:09:14.462795] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.241 [2024-05-15 01:09:14.462818] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.241 [2024-05-15 01:09:14.462833] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.241 [2024-05-15 01:09:14.466495] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.241 [2024-05-15 01:09:14.475516] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.241 [2024-05-15 01:09:14.475960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.241 [2024-05-15 01:09:14.476192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.241 [2024-05-15 01:09:14.476217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.241 [2024-05-15 01:09:14.476249] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.241 [2024-05-15 01:09:14.476490] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.241 [2024-05-15 01:09:14.476735] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.241 [2024-05-15 01:09:14.476758] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.241 [2024-05-15 01:09:14.476773] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.241 [2024-05-15 01:09:14.480417] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.241 [2024-05-15 01:09:14.489425] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.241 [2024-05-15 01:09:14.489880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.241 [2024-05-15 01:09:14.490106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.241 [2024-05-15 01:09:14.490140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.242 [2024-05-15 01:09:14.490158] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.242 [2024-05-15 01:09:14.490400] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.242 [2024-05-15 01:09:14.490644] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.242 [2024-05-15 01:09:14.490668] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.242 [2024-05-15 01:09:14.490683] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.242 [2024-05-15 01:09:14.494328] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.242 [2024-05-15 01:09:14.503344] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.242 [2024-05-15 01:09:14.503881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.242 [2024-05-15 01:09:14.504073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.242 [2024-05-15 01:09:14.504102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.242 [2024-05-15 01:09:14.504120] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.242 [2024-05-15 01:09:14.504361] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.242 [2024-05-15 01:09:14.504606] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.242 [2024-05-15 01:09:14.504629] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.242 [2024-05-15 01:09:14.504644] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.242 [2024-05-15 01:09:14.508285] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.242 [2024-05-15 01:09:14.517297] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.242 [2024-05-15 01:09:14.517746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.242 [2024-05-15 01:09:14.517961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.242 [2024-05-15 01:09:14.517992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.242 [2024-05-15 01:09:14.518011] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.242 [2024-05-15 01:09:14.518253] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.242 [2024-05-15 01:09:14.518499] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.242 [2024-05-15 01:09:14.518524] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.242 [2024-05-15 01:09:14.518541] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.242 [2024-05-15 01:09:14.522185] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.242 [2024-05-15 01:09:14.531205] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.242 [2024-05-15 01:09:14.531659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.242 [2024-05-15 01:09:14.531879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.242 [2024-05-15 01:09:14.531906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.242 [2024-05-15 01:09:14.531939] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.242 [2024-05-15 01:09:14.532184] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.242 [2024-05-15 01:09:14.532429] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.242 [2024-05-15 01:09:14.532452] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.242 [2024-05-15 01:09:14.532467] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.242 [2024-05-15 01:09:14.536108] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.242 [2024-05-15 01:09:14.545116] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.242 [2024-05-15 01:09:14.545683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.242 [2024-05-15 01:09:14.545918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.242 [2024-05-15 01:09:14.545955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.242 [2024-05-15 01:09:14.545973] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.242 [2024-05-15 01:09:14.546214] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.242 [2024-05-15 01:09:14.546460] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.242 [2024-05-15 01:09:14.546482] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.242 [2024-05-15 01:09:14.546497] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.242 [2024-05-15 01:09:14.550138] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.242 [2024-05-15 01:09:14.559152] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.242 [2024-05-15 01:09:14.559703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.242 [2024-05-15 01:09:14.559946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.242 [2024-05-15 01:09:14.559975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.242 [2024-05-15 01:09:14.559992] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.242 [2024-05-15 01:09:14.560233] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.242 [2024-05-15 01:09:14.560478] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.242 [2024-05-15 01:09:14.560501] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.242 [2024-05-15 01:09:14.560516] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.242 [2024-05-15 01:09:14.564159] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.242 [2024-05-15 01:09:14.573205] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.242 [2024-05-15 01:09:14.573669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.242 [2024-05-15 01:09:14.573835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.242 [2024-05-15 01:09:14.573859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.242 [2024-05-15 01:09:14.573874] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.242 [2024-05-15 01:09:14.574153] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.242 [2024-05-15 01:09:14.574400] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.242 [2024-05-15 01:09:14.574423] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.242 [2024-05-15 01:09:14.574438] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.242 [2024-05-15 01:09:14.578080] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.242 [2024-05-15 01:09:14.587304] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.242 [2024-05-15 01:09:14.587777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.242 [2024-05-15 01:09:14.588054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.242 [2024-05-15 01:09:14.588079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.242 [2024-05-15 01:09:14.588095] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.242 [2024-05-15 01:09:14.588349] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.242 [2024-05-15 01:09:14.588606] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.242 [2024-05-15 01:09:14.588629] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.242 [2024-05-15 01:09:14.588644] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.242 [2024-05-15 01:09:14.592290] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.242 [2024-05-15 01:09:14.601302] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.242 [2024-05-15 01:09:14.601759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.242 [2024-05-15 01:09:14.601947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.242 [2024-05-15 01:09:14.601978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.242 [2024-05-15 01:09:14.601995] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.242 [2024-05-15 01:09:14.602237] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.242 [2024-05-15 01:09:14.602483] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.242 [2024-05-15 01:09:14.602506] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.243 [2024-05-15 01:09:14.602521] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.243 [2024-05-15 01:09:14.606163] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.243 [2024-05-15 01:09:14.615398] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.243 [2024-05-15 01:09:14.615857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.243 [2024-05-15 01:09:14.616073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.243 [2024-05-15 01:09:14.616102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.243 [2024-05-15 01:09:14.616120] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.243 [2024-05-15 01:09:14.616361] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.243 [2024-05-15 01:09:14.616613] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.243 [2024-05-15 01:09:14.616637] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.243 [2024-05-15 01:09:14.616652] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.243 [2024-05-15 01:09:14.620298] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.243 [2024-05-15 01:09:14.629342] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.243 [2024-05-15 01:09:14.629816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.243 [2024-05-15 01:09:14.629982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.243 [2024-05-15 01:09:14.630007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.243 [2024-05-15 01:09:14.630022] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.243 [2024-05-15 01:09:14.630270] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.243 [2024-05-15 01:09:14.630516] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.243 [2024-05-15 01:09:14.630538] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.243 [2024-05-15 01:09:14.630553] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.505 [2024-05-15 01:09:14.634228] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.505 [2024-05-15 01:09:14.643279] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.505 [2024-05-15 01:09:14.643729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.505 [2024-05-15 01:09:14.643964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.505 [2024-05-15 01:09:14.644001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.505 [2024-05-15 01:09:14.644018] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.505 [2024-05-15 01:09:14.644261] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.505 [2024-05-15 01:09:14.644506] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.505 [2024-05-15 01:09:14.644529] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.505 [2024-05-15 01:09:14.644544] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.505 [2024-05-15 01:09:14.648186] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.505 [2024-05-15 01:09:14.657193] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.505 [2024-05-15 01:09:14.657845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.505 [2024-05-15 01:09:14.658103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.505 [2024-05-15 01:09:14.658131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.505 [2024-05-15 01:09:14.658148] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.505 [2024-05-15 01:09:14.658389] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.505 [2024-05-15 01:09:14.658635] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.505 [2024-05-15 01:09:14.658664] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.505 [2024-05-15 01:09:14.658680] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.505 [2024-05-15 01:09:14.662322] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.505 [2024-05-15 01:09:14.671127] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.505 [2024-05-15 01:09:14.671583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.505 [2024-05-15 01:09:14.671781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.505 [2024-05-15 01:09:14.671806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.505 [2024-05-15 01:09:14.671821] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.505 [2024-05-15 01:09:14.672098] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.505 [2024-05-15 01:09:14.672344] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.505 [2024-05-15 01:09:14.672367] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.505 [2024-05-15 01:09:14.672383] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.505 [2024-05-15 01:09:14.676024] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.505 [2024-05-15 01:09:14.685037] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.505 [2024-05-15 01:09:14.685483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.505 [2024-05-15 01:09:14.685749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.505 [2024-05-15 01:09:14.685776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.505 [2024-05-15 01:09:14.685793] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.505 [2024-05-15 01:09:14.686058] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.505 [2024-05-15 01:09:14.686304] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.505 [2024-05-15 01:09:14.686327] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.505 [2024-05-15 01:09:14.686342] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.505 [2024-05-15 01:09:14.689991] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.505 [2024-05-15 01:09:14.699014] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.505 [2024-05-15 01:09:14.699488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.505 [2024-05-15 01:09:14.699724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.505 [2024-05-15 01:09:14.699751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.505 [2024-05-15 01:09:14.699768] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.505 [2024-05-15 01:09:14.700026] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.505 [2024-05-15 01:09:14.700279] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.505 [2024-05-15 01:09:14.700306] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.505 [2024-05-15 01:09:14.700328] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.505 [2024-05-15 01:09:14.704047] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.505 [2024-05-15 01:09:14.713101] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.505 [2024-05-15 01:09:14.713531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.505 [2024-05-15 01:09:14.713807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.505 [2024-05-15 01:09:14.713858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.505 [2024-05-15 01:09:14.713875] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.505 [2024-05-15 01:09:14.714125] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.505 [2024-05-15 01:09:14.714372] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.505 [2024-05-15 01:09:14.714395] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.505 [2024-05-15 01:09:14.714410] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.505 [2024-05-15 01:09:14.718049] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.505 [2024-05-15 01:09:14.727057] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.505 [2024-05-15 01:09:14.727582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.505 [2024-05-15 01:09:14.727814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.505 [2024-05-15 01:09:14.727842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.505 [2024-05-15 01:09:14.727858] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.505 [2024-05-15 01:09:14.728111] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.506 [2024-05-15 01:09:14.728357] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.506 [2024-05-15 01:09:14.728380] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.506 [2024-05-15 01:09:14.728395] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.506 [2024-05-15 01:09:14.732043] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.506 [2024-05-15 01:09:14.741071] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.506 [2024-05-15 01:09:14.741563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.506 [2024-05-15 01:09:14.741955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.506 [2024-05-15 01:09:14.742002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.506 [2024-05-15 01:09:14.742019] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.506 [2024-05-15 01:09:14.742260] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.506 [2024-05-15 01:09:14.742505] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.506 [2024-05-15 01:09:14.742528] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.506 [2024-05-15 01:09:14.742543] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.506 [2024-05-15 01:09:14.746188] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.506 [2024-05-15 01:09:14.755004] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.506 [2024-05-15 01:09:14.755449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.506 [2024-05-15 01:09:14.755745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.506 [2024-05-15 01:09:14.755775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.506 [2024-05-15 01:09:14.755792] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.506 [2024-05-15 01:09:14.756046] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.506 [2024-05-15 01:09:14.756292] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.506 [2024-05-15 01:09:14.756315] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.506 [2024-05-15 01:09:14.756331] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.506 [2024-05-15 01:09:14.759972] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.506 [2024-05-15 01:09:14.768988] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.506 [2024-05-15 01:09:14.769481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.506 [2024-05-15 01:09:14.769698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.506 [2024-05-15 01:09:14.769722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.506 [2024-05-15 01:09:14.769736] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.506 [2024-05-15 01:09:14.769999] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.506 [2024-05-15 01:09:14.770262] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.506 [2024-05-15 01:09:14.770286] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.506 [2024-05-15 01:09:14.770302] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.506 [2024-05-15 01:09:14.773941] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.506 [2024-05-15 01:09:14.782959] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.506 [2024-05-15 01:09:14.783427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.506 [2024-05-15 01:09:14.783684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.506 [2024-05-15 01:09:14.783711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.506 [2024-05-15 01:09:14.783728] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.506 [2024-05-15 01:09:14.783981] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.506 [2024-05-15 01:09:14.784228] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.506 [2024-05-15 01:09:14.784251] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.506 [2024-05-15 01:09:14.784266] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.506 [2024-05-15 01:09:14.787900] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.506 [2024-05-15 01:09:14.796915] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.506 [2024-05-15 01:09:14.797404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.506 [2024-05-15 01:09:14.797718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.506 [2024-05-15 01:09:14.797777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.506 [2024-05-15 01:09:14.797794] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.506 [2024-05-15 01:09:14.798044] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.506 [2024-05-15 01:09:14.798290] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.506 [2024-05-15 01:09:14.798313] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.506 [2024-05-15 01:09:14.798329] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.506 [2024-05-15 01:09:14.801970] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.506 [2024-05-15 01:09:14.810981] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.506 [2024-05-15 01:09:14.811432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.506 [2024-05-15 01:09:14.811760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.506 [2024-05-15 01:09:14.811818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.506 [2024-05-15 01:09:14.811834] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.506 [2024-05-15 01:09:14.812086] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.506 [2024-05-15 01:09:14.812332] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.506 [2024-05-15 01:09:14.812354] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.506 [2024-05-15 01:09:14.812370] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.506 [2024-05-15 01:09:14.816010] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.506 [2024-05-15 01:09:14.825030] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.506 [2024-05-15 01:09:14.825509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.506 [2024-05-15 01:09:14.825716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.506 [2024-05-15 01:09:14.825743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.506 [2024-05-15 01:09:14.825760] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.506 [2024-05-15 01:09:14.826012] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.506 [2024-05-15 01:09:14.826258] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.506 [2024-05-15 01:09:14.826281] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.506 [2024-05-15 01:09:14.826296] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.506 [2024-05-15 01:09:14.829958] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.506 [2024-05-15 01:09:14.838980] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.506 [2024-05-15 01:09:14.839467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.506 [2024-05-15 01:09:14.839751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.506 [2024-05-15 01:09:14.839779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.506 [2024-05-15 01:09:14.839795] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.506 [2024-05-15 01:09:14.840047] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.506 [2024-05-15 01:09:14.840293] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.506 [2024-05-15 01:09:14.840316] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.506 [2024-05-15 01:09:14.840331] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.506 [2024-05-15 01:09:14.843972] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.506 [2024-05-15 01:09:14.852997] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.506 [2024-05-15 01:09:14.853476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.506 [2024-05-15 01:09:14.853712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.506 [2024-05-15 01:09:14.853737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.506 [2024-05-15 01:09:14.853752] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.506 [2024-05-15 01:09:14.854028] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.506 [2024-05-15 01:09:14.854275] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.506 [2024-05-15 01:09:14.854298] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.506 [2024-05-15 01:09:14.854312] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.506 [2024-05-15 01:09:14.857953] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.506 [2024-05-15 01:09:14.866978] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.507 [2024-05-15 01:09:14.867501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.507 [2024-05-15 01:09:14.867728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.507 [2024-05-15 01:09:14.867777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.507 [2024-05-15 01:09:14.867794] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.507 [2024-05-15 01:09:14.868046] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.507 [2024-05-15 01:09:14.868292] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.507 [2024-05-15 01:09:14.868315] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.507 [2024-05-15 01:09:14.868330] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.507 [2024-05-15 01:09:14.871981] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.507 [2024-05-15 01:09:14.880993] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.507 [2024-05-15 01:09:14.881530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.507 [2024-05-15 01:09:14.881735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.507 [2024-05-15 01:09:14.881767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.507 [2024-05-15 01:09:14.881783] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.507 [2024-05-15 01:09:14.882047] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.507 [2024-05-15 01:09:14.882293] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.507 [2024-05-15 01:09:14.882316] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.507 [2024-05-15 01:09:14.882331] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.507 [2024-05-15 01:09:14.885973] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.507 [2024-05-15 01:09:14.895023] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.507 [2024-05-15 01:09:14.895467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.507 [2024-05-15 01:09:14.895878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.507 [2024-05-15 01:09:14.895942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.507 [2024-05-15 01:09:14.895962] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.507 [2024-05-15 01:09:14.896203] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.507 [2024-05-15 01:09:14.896448] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.507 [2024-05-15 01:09:14.896471] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.507 [2024-05-15 01:09:14.896486] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.769 [2024-05-15 01:09:14.900146] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.769 [2024-05-15 01:09:14.908983] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.769 [2024-05-15 01:09:14.909449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.769 [2024-05-15 01:09:14.909665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.769 [2024-05-15 01:09:14.909692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.769 [2024-05-15 01:09:14.909709] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.769 [2024-05-15 01:09:14.909963] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.769 [2024-05-15 01:09:14.910210] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.769 [2024-05-15 01:09:14.910233] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.769 [2024-05-15 01:09:14.910248] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.769 [2024-05-15 01:09:14.913878] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.769 [2024-05-15 01:09:14.922885] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.769 [2024-05-15 01:09:14.923412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.769 [2024-05-15 01:09:14.923692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.769 [2024-05-15 01:09:14.923742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.769 [2024-05-15 01:09:14.923766] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.769 [2024-05-15 01:09:14.924020] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.769 [2024-05-15 01:09:14.924266] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.769 [2024-05-15 01:09:14.924290] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.769 [2024-05-15 01:09:14.924305] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.769 [2024-05-15 01:09:14.927944] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.769 [2024-05-15 01:09:14.936951] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.769 [2024-05-15 01:09:14.937420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.769 [2024-05-15 01:09:14.937649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.769 [2024-05-15 01:09:14.937677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.769 [2024-05-15 01:09:14.937694] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.769 [2024-05-15 01:09:14.937945] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.769 [2024-05-15 01:09:14.938191] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.769 [2024-05-15 01:09:14.938214] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.769 [2024-05-15 01:09:14.938229] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.769 [2024-05-15 01:09:14.941861] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.769 [2024-05-15 01:09:14.950890] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.769 [2024-05-15 01:09:14.951580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.769 [2024-05-15 01:09:14.951985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.769 [2024-05-15 01:09:14.952047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.769 [2024-05-15 01:09:14.952064] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.769 [2024-05-15 01:09:14.952306] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.769 [2024-05-15 01:09:14.952551] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.769 [2024-05-15 01:09:14.952574] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.769 [2024-05-15 01:09:14.952589] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.769 [2024-05-15 01:09:14.956273] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.769 [2024-05-15 01:09:14.964957] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.769 [2024-05-15 01:09:14.965387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.769 [2024-05-15 01:09:14.965648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.769 [2024-05-15 01:09:14.965676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.769 [2024-05-15 01:09:14.965693] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.769 [2024-05-15 01:09:14.965952] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.769 [2024-05-15 01:09:14.966198] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.769 [2024-05-15 01:09:14.966222] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.769 [2024-05-15 01:09:14.966237] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.769 [2024-05-15 01:09:14.969909] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.769 [2024-05-15 01:09:14.978923] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.769 [2024-05-15 01:09:14.979379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.769 [2024-05-15 01:09:14.979600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.769 [2024-05-15 01:09:14.979624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.769 [2024-05-15 01:09:14.979639] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.769 [2024-05-15 01:09:14.979880] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.769 [2024-05-15 01:09:14.980135] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.769 [2024-05-15 01:09:14.980159] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.769 [2024-05-15 01:09:14.980174] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.769 [2024-05-15 01:09:14.983809] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.769 [2024-05-15 01:09:14.993042] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.769 [2024-05-15 01:09:14.993576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.769 [2024-05-15 01:09:14.993781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.769 [2024-05-15 01:09:14.993808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.769 [2024-05-15 01:09:14.993825] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.769 [2024-05-15 01:09:14.994076] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.769 [2024-05-15 01:09:14.994322] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.769 [2024-05-15 01:09:14.994345] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.769 [2024-05-15 01:09:14.994360] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.769 [2024-05-15 01:09:14.998002] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.769 [2024-05-15 01:09:15.007021] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.769 [2024-05-15 01:09:15.007498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.769 [2024-05-15 01:09:15.007771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.769 [2024-05-15 01:09:15.007801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.769 [2024-05-15 01:09:15.007818] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.769 [2024-05-15 01:09:15.008072] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.769 [2024-05-15 01:09:15.008325] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.769 [2024-05-15 01:09:15.008349] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.769 [2024-05-15 01:09:15.008364] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.769 [2024-05-15 01:09:15.012219] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.769 [2024-05-15 01:09:15.021034] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.769 [2024-05-15 01:09:15.021522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.769 [2024-05-15 01:09:15.021761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.769 [2024-05-15 01:09:15.021789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.770 [2024-05-15 01:09:15.021806] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.770 [2024-05-15 01:09:15.022058] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.770 [2024-05-15 01:09:15.022301] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.770 [2024-05-15 01:09:15.022325] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.770 [2024-05-15 01:09:15.022340] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.770 [2024-05-15 01:09:15.025991] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.770 [2024-05-15 01:09:15.035020] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.770 [2024-05-15 01:09:15.035544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.770 [2024-05-15 01:09:15.035815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.770 [2024-05-15 01:09:15.035867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.770 [2024-05-15 01:09:15.035884] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.770 [2024-05-15 01:09:15.036138] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.770 [2024-05-15 01:09:15.036385] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.770 [2024-05-15 01:09:15.036408] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.770 [2024-05-15 01:09:15.036423] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.770 [2024-05-15 01:09:15.040071] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.770 [2024-05-15 01:09:15.049112] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.770 [2024-05-15 01:09:15.049644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.770 [2024-05-15 01:09:15.049866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.770 [2024-05-15 01:09:15.049894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.770 [2024-05-15 01:09:15.049911] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.770 [2024-05-15 01:09:15.050160] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.770 [2024-05-15 01:09:15.050407] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.770 [2024-05-15 01:09:15.050436] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.770 [2024-05-15 01:09:15.050451] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.770 [2024-05-15 01:09:15.054100] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.770 [2024-05-15 01:09:15.063138] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.770 [2024-05-15 01:09:15.063796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.770 [2024-05-15 01:09:15.064053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.770 [2024-05-15 01:09:15.064082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.770 [2024-05-15 01:09:15.064099] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.770 [2024-05-15 01:09:15.064340] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.770 [2024-05-15 01:09:15.064591] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.770 [2024-05-15 01:09:15.064614] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.770 [2024-05-15 01:09:15.064629] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.770 [2024-05-15 01:09:15.068288] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.770 [2024-05-15 01:09:15.077122] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.770 [2024-05-15 01:09:15.077604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.770 [2024-05-15 01:09:15.077858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.770 [2024-05-15 01:09:15.077886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.770 [2024-05-15 01:09:15.077903] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.770 [2024-05-15 01:09:15.078154] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.770 [2024-05-15 01:09:15.078401] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.770 [2024-05-15 01:09:15.078424] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.770 [2024-05-15 01:09:15.078439] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.770 [2024-05-15 01:09:15.082087] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.770 [2024-05-15 01:09:15.091146] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.770 [2024-05-15 01:09:15.091631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.770 [2024-05-15 01:09:15.091849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.770 [2024-05-15 01:09:15.091877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.770 [2024-05-15 01:09:15.091894] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.770 [2024-05-15 01:09:15.092147] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.770 [2024-05-15 01:09:15.092394] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.770 [2024-05-15 01:09:15.092418] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.770 [2024-05-15 01:09:15.092439] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.770 [2024-05-15 01:09:15.096087] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.770 [2024-05-15 01:09:15.105119] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.770 [2024-05-15 01:09:15.105605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.770 [2024-05-15 01:09:15.105785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.770 [2024-05-15 01:09:15.105812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.770 [2024-05-15 01:09:15.105829] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.770 [2024-05-15 01:09:15.106080] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.770 [2024-05-15 01:09:15.106326] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.770 [2024-05-15 01:09:15.106350] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.770 [2024-05-15 01:09:15.106365] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.770 [2024-05-15 01:09:15.110012] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.770 [2024-05-15 01:09:15.119037] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.770 [2024-05-15 01:09:15.119487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.770 [2024-05-15 01:09:15.119702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.770 [2024-05-15 01:09:15.119727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.770 [2024-05-15 01:09:15.119743] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.770 [2024-05-15 01:09:15.119998] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.770 [2024-05-15 01:09:15.120244] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.770 [2024-05-15 01:09:15.120267] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.770 [2024-05-15 01:09:15.120282] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.770 [2024-05-15 01:09:15.123923] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:02.770 [2024-05-15 01:09:15.133081] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.770 [2024-05-15 01:09:15.133769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.770 [2024-05-15 01:09:15.133990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.770 [2024-05-15 01:09:15.134016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.770 [2024-05-15 01:09:15.134032] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.770 [2024-05-15 01:09:15.134263] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.770 [2024-05-15 01:09:15.134532] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.770 [2024-05-15 01:09:15.134556] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.770 [2024-05-15 01:09:15.134572] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.770 [2024-05-15 01:09:15.138326] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.770 [2024-05-15 01:09:15.147156] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:02.770 [2024-05-15 01:09:15.147624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.770 [2024-05-15 01:09:15.147820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.770 [2024-05-15 01:09:15.147845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:02.770 [2024-05-15 01:09:15.147860] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:02.770 [2024-05-15 01:09:15.148087] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:02.770 [2024-05-15 01:09:15.148337] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:02.770 [2024-05-15 01:09:15.148356] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:02.770 [2024-05-15 01:09:15.148369] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:02.770 [2024-05-15 01:09:15.152085] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.031 [2024-05-15 01:09:15.161311] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.031 [2024-05-15 01:09:15.161791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.031 [2024-05-15 01:09:15.162017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.031 [2024-05-15 01:09:15.162044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.031 [2024-05-15 01:09:15.162060] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.031 [2024-05-15 01:09:15.162309] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.031 [2024-05-15 01:09:15.162555] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.031 [2024-05-15 01:09:15.162578] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.031 [2024-05-15 01:09:15.162593] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.031 [2024-05-15 01:09:15.166300] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.031 [2024-05-15 01:09:15.175417] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.031 [2024-05-15 01:09:15.175898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.031 [2024-05-15 01:09:15.176129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.031 [2024-05-15 01:09:15.176154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.031 [2024-05-15 01:09:15.176170] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.031 [2024-05-15 01:09:15.176416] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.031 [2024-05-15 01:09:15.176661] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.031 [2024-05-15 01:09:15.176684] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.031 [2024-05-15 01:09:15.176698] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.031 [2024-05-15 01:09:15.180391] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.031 [2024-05-15 01:09:15.189477] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.031 [2024-05-15 01:09:15.189936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.031 [2024-05-15 01:09:15.190157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.031 [2024-05-15 01:09:15.190182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.031 [2024-05-15 01:09:15.190198] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.031 [2024-05-15 01:09:15.190449] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.031 [2024-05-15 01:09:15.190694] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.031 [2024-05-15 01:09:15.190718] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.031 [2024-05-15 01:09:15.190733] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.031 [2024-05-15 01:09:15.194008] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.031 [2024-05-15 01:09:15.203582] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.031 [2024-05-15 01:09:15.204081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.032 [2024-05-15 01:09:15.204254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.032 [2024-05-15 01:09:15.204279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.032 [2024-05-15 01:09:15.204294] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.032 [2024-05-15 01:09:15.204542] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.032 [2024-05-15 01:09:15.204796] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.032 [2024-05-15 01:09:15.204820] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.032 [2024-05-15 01:09:15.204835] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.032 [2024-05-15 01:09:15.208587] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.032 [2024-05-15 01:09:15.217527] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.032 [2024-05-15 01:09:15.218008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.032 [2024-05-15 01:09:15.218181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.032 [2024-05-15 01:09:15.218206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.032 [2024-05-15 01:09:15.218221] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.032 [2024-05-15 01:09:15.218482] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.032 [2024-05-15 01:09:15.218727] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.032 [2024-05-15 01:09:15.218749] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.032 [2024-05-15 01:09:15.218764] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.032 [2024-05-15 01:09:15.222418] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.032 [2024-05-15 01:09:15.231423] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.032 [2024-05-15 01:09:15.231849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.032 [2024-05-15 01:09:15.232118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.032 [2024-05-15 01:09:15.232145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.032 [2024-05-15 01:09:15.232160] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.032 [2024-05-15 01:09:15.232417] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.032 [2024-05-15 01:09:15.232662] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.032 [2024-05-15 01:09:15.232685] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.032 [2024-05-15 01:09:15.232700] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.032 [2024-05-15 01:09:15.236346] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.032 [2024-05-15 01:09:15.245368] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.032 [2024-05-15 01:09:15.245868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.032 [2024-05-15 01:09:15.246095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.032 [2024-05-15 01:09:15.246125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.032 [2024-05-15 01:09:15.246142] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.032 [2024-05-15 01:09:15.246383] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.032 [2024-05-15 01:09:15.246629] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.032 [2024-05-15 01:09:15.246652] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.032 [2024-05-15 01:09:15.246666] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.032 [2024-05-15 01:09:15.250315] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.032 [2024-05-15 01:09:15.259340] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.032 [2024-05-15 01:09:15.259984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.032 [2024-05-15 01:09:15.260194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.032 [2024-05-15 01:09:15.260222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.032 [2024-05-15 01:09:15.260239] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.032 [2024-05-15 01:09:15.260480] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.032 [2024-05-15 01:09:15.260726] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.032 [2024-05-15 01:09:15.260749] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.032 [2024-05-15 01:09:15.260764] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.032 [2024-05-15 01:09:15.264411] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.032 [2024-05-15 01:09:15.273434] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.032 [2024-05-15 01:09:15.273961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.032 [2024-05-15 01:09:15.274196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.032 [2024-05-15 01:09:15.274224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.032 [2024-05-15 01:09:15.274241] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.032 [2024-05-15 01:09:15.274482] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.032 [2024-05-15 01:09:15.274728] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.032 [2024-05-15 01:09:15.274751] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.032 [2024-05-15 01:09:15.274766] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.032 [2024-05-15 01:09:15.278439] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.032 [2024-05-15 01:09:15.287460] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.032 [2024-05-15 01:09:15.287912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.032 [2024-05-15 01:09:15.288137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.032 [2024-05-15 01:09:15.288167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.032 [2024-05-15 01:09:15.288184] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.032 [2024-05-15 01:09:15.288426] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.032 [2024-05-15 01:09:15.288672] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.032 [2024-05-15 01:09:15.288695] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.032 [2024-05-15 01:09:15.288710] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.032 [2024-05-15 01:09:15.292359] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.032 [2024-05-15 01:09:15.301379] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.032 [2024-05-15 01:09:15.301871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.032 [2024-05-15 01:09:15.302086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.032 [2024-05-15 01:09:15.302112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.032 [2024-05-15 01:09:15.302127] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.032 [2024-05-15 01:09:15.302374] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.032 [2024-05-15 01:09:15.302631] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.032 [2024-05-15 01:09:15.302654] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.032 [2024-05-15 01:09:15.302669] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.032 [2024-05-15 01:09:15.306313] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.032 [2024-05-15 01:09:15.315338] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.032 [2024-05-15 01:09:15.315807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.032 [2024-05-15 01:09:15.316034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.032 [2024-05-15 01:09:15.316060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.032 [2024-05-15 01:09:15.316080] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.032 [2024-05-15 01:09:15.316342] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.032 [2024-05-15 01:09:15.316588] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.032 [2024-05-15 01:09:15.316611] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.032 [2024-05-15 01:09:15.316626] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.032 [2024-05-15 01:09:15.320267] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.032 [2024-05-15 01:09:15.329275] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.032 [2024-05-15 01:09:15.329754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.032 [2024-05-15 01:09:15.329995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.032 [2024-05-15 01:09:15.330025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.032 [2024-05-15 01:09:15.330041] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.032 [2024-05-15 01:09:15.330282] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.033 [2024-05-15 01:09:15.330528] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.033 [2024-05-15 01:09:15.330551] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.033 [2024-05-15 01:09:15.330566] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.033 [2024-05-15 01:09:15.334205] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.033 [2024-05-15 01:09:15.343240] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.033 [2024-05-15 01:09:15.343713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.033 [2024-05-15 01:09:15.343892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.033 [2024-05-15 01:09:15.343919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.033 [2024-05-15 01:09:15.343945] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.033 [2024-05-15 01:09:15.344187] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.033 [2024-05-15 01:09:15.344433] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.033 [2024-05-15 01:09:15.344455] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.033 [2024-05-15 01:09:15.344470] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.033 [2024-05-15 01:09:15.348131] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.033 [2024-05-15 01:09:15.357142] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.033 [2024-05-15 01:09:15.357624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.033 [2024-05-15 01:09:15.357833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.033 [2024-05-15 01:09:15.357861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.033 [2024-05-15 01:09:15.357878] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.033 [2024-05-15 01:09:15.358136] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.033 [2024-05-15 01:09:15.358384] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.033 [2024-05-15 01:09:15.358407] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.033 [2024-05-15 01:09:15.358422] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.033 [2024-05-15 01:09:15.362062] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.033 [2024-05-15 01:09:15.371073] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.033 [2024-05-15 01:09:15.371519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.033 [2024-05-15 01:09:15.371746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.033 [2024-05-15 01:09:15.371770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.033 [2024-05-15 01:09:15.371785] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.033 [2024-05-15 01:09:15.372050] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.033 [2024-05-15 01:09:15.372317] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.033 [2024-05-15 01:09:15.372340] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.033 [2024-05-15 01:09:15.372355] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.033 [2024-05-15 01:09:15.375997] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.033 [2024-05-15 01:09:15.385003] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.033 [2024-05-15 01:09:15.385491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.033 [2024-05-15 01:09:15.385720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.033 [2024-05-15 01:09:15.385748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.033 [2024-05-15 01:09:15.385765] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.033 [2024-05-15 01:09:15.386017] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.033 [2024-05-15 01:09:15.386263] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.033 [2024-05-15 01:09:15.386286] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.033 [2024-05-15 01:09:15.386301] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.033 [2024-05-15 01:09:15.389937] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.033 [2024-05-15 01:09:15.398942] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.033 [2024-05-15 01:09:15.399421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.033 [2024-05-15 01:09:15.399730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.033 [2024-05-15 01:09:15.399783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.033 [2024-05-15 01:09:15.399800] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.033 [2024-05-15 01:09:15.400053] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.033 [2024-05-15 01:09:15.400305] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.033 [2024-05-15 01:09:15.400329] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.033 [2024-05-15 01:09:15.400344] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.033 [2024-05-15 01:09:15.403998] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.033 [2024-05-15 01:09:15.412996] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.033 [2024-05-15 01:09:15.413506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.033 [2024-05-15 01:09:15.413701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.033 [2024-05-15 01:09:15.413726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.033 [2024-05-15 01:09:15.413741] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.033 [2024-05-15 01:09:15.414000] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.033 [2024-05-15 01:09:15.414247] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.033 [2024-05-15 01:09:15.414270] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.033 [2024-05-15 01:09:15.414284] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.033 [2024-05-15 01:09:15.417914] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.294 [2024-05-15 01:09:15.426998] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.294 [2024-05-15 01:09:15.427611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.294 [2024-05-15 01:09:15.427863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.294 [2024-05-15 01:09:15.427905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.294 [2024-05-15 01:09:15.427922] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.294 [2024-05-15 01:09:15.428199] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.294 [2024-05-15 01:09:15.428445] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.294 [2024-05-15 01:09:15.428468] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.294 [2024-05-15 01:09:15.428483] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.294 [2024-05-15 01:09:15.432141] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.294 [2024-05-15 01:09:15.440925] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.294 [2024-05-15 01:09:15.441378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.294 [2024-05-15 01:09:15.441664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.294 [2024-05-15 01:09:15.441688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.294 [2024-05-15 01:09:15.441717] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.294 [2024-05-15 01:09:15.441972] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.294 [2024-05-15 01:09:15.442219] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.294 [2024-05-15 01:09:15.442247] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.294 [2024-05-15 01:09:15.442263] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.294 [2024-05-15 01:09:15.445895] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.294 [2024-05-15 01:09:15.454899] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.294 [2024-05-15 01:09:15.455376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.294 [2024-05-15 01:09:15.455653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.294 [2024-05-15 01:09:15.455678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.294 [2024-05-15 01:09:15.455693] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.295 [2024-05-15 01:09:15.455969] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.295 [2024-05-15 01:09:15.456226] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.295 [2024-05-15 01:09:15.456249] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.295 [2024-05-15 01:09:15.456264] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.295 [2024-05-15 01:09:15.459968] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.295 [2024-05-15 01:09:15.468806] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.295 [2024-05-15 01:09:15.469265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.295 [2024-05-15 01:09:15.469537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.295 [2024-05-15 01:09:15.469566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.295 [2024-05-15 01:09:15.469583] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.295 [2024-05-15 01:09:15.469825] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.295 [2024-05-15 01:09:15.470083] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.295 [2024-05-15 01:09:15.470107] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.295 [2024-05-15 01:09:15.470122] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.295 [2024-05-15 01:09:15.473753] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.295 [2024-05-15 01:09:15.482762] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.295 [2024-05-15 01:09:15.483221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.295 [2024-05-15 01:09:15.483437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.295 [2024-05-15 01:09:15.483465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.295 [2024-05-15 01:09:15.483483] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.295 [2024-05-15 01:09:15.483724] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.295 [2024-05-15 01:09:15.483980] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.295 [2024-05-15 01:09:15.484004] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.295 [2024-05-15 01:09:15.484025] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.295 [2024-05-15 01:09:15.487657] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.295 [2024-05-15 01:09:15.496661] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.295 [2024-05-15 01:09:15.497141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.295 [2024-05-15 01:09:15.497419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.295 [2024-05-15 01:09:15.497447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.295 [2024-05-15 01:09:15.497464] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.295 [2024-05-15 01:09:15.497705] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.295 [2024-05-15 01:09:15.497961] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.295 [2024-05-15 01:09:15.497985] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.295 [2024-05-15 01:09:15.498000] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.295 [2024-05-15 01:09:15.501630] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.295 [2024-05-15 01:09:15.510637] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.295 [2024-05-15 01:09:15.511107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.295 [2024-05-15 01:09:15.511346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.295 [2024-05-15 01:09:15.511374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.295 [2024-05-15 01:09:15.511391] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.295 [2024-05-15 01:09:15.511632] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.295 [2024-05-15 01:09:15.511877] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.295 [2024-05-15 01:09:15.511900] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.295 [2024-05-15 01:09:15.511915] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.295 [2024-05-15 01:09:15.515556] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.295 [2024-05-15 01:09:15.524563] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.295 [2024-05-15 01:09:15.525034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.295 [2024-05-15 01:09:15.525270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.295 [2024-05-15 01:09:15.525299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.295 [2024-05-15 01:09:15.525315] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.295 [2024-05-15 01:09:15.525557] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.295 [2024-05-15 01:09:15.525802] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.295 [2024-05-15 01:09:15.525825] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.295 [2024-05-15 01:09:15.525840] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.295 [2024-05-15 01:09:15.529487] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.295 [2024-05-15 01:09:15.538485] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.295 [2024-05-15 01:09:15.538957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.295 [2024-05-15 01:09:15.539134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.295 [2024-05-15 01:09:15.539162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.295 [2024-05-15 01:09:15.539179] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.295 [2024-05-15 01:09:15.539420] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.295 [2024-05-15 01:09:15.539665] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.295 [2024-05-15 01:09:15.539688] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.295 [2024-05-15 01:09:15.539703] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.295 [2024-05-15 01:09:15.543342] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.295 [2024-05-15 01:09:15.552554] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.295 [2024-05-15 01:09:15.553015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.295 [2024-05-15 01:09:15.553226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.295 [2024-05-15 01:09:15.553255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.295 [2024-05-15 01:09:15.553272] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.295 [2024-05-15 01:09:15.553513] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.295 [2024-05-15 01:09:15.553758] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.295 [2024-05-15 01:09:15.553781] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.295 [2024-05-15 01:09:15.553796] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.295 [2024-05-15 01:09:15.557434] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.295 [2024-05-15 01:09:15.566648] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.295 [2024-05-15 01:09:15.567149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.295 [2024-05-15 01:09:15.567548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.295 [2024-05-15 01:09:15.567596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.295 [2024-05-15 01:09:15.567613] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.295 [2024-05-15 01:09:15.567854] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.295 [2024-05-15 01:09:15.568112] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.295 [2024-05-15 01:09:15.568136] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.295 [2024-05-15 01:09:15.568151] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.295 [2024-05-15 01:09:15.571785] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.295 [2024-05-15 01:09:15.580587] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.295 [2024-05-15 01:09:15.581056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.295 [2024-05-15 01:09:15.581400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.295 [2024-05-15 01:09:15.581448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.295 [2024-05-15 01:09:15.581465] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.295 [2024-05-15 01:09:15.581707] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.295 [2024-05-15 01:09:15.581964] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.295 [2024-05-15 01:09:15.581988] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.295 [2024-05-15 01:09:15.582003] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.295 [2024-05-15 01:09:15.585635] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.295 [2024-05-15 01:09:15.594644] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.295 [2024-05-15 01:09:15.595105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.296 [2024-05-15 01:09:15.595326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.296 [2024-05-15 01:09:15.595352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.296 [2024-05-15 01:09:15.595367] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.296 [2024-05-15 01:09:15.595638] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.296 [2024-05-15 01:09:15.595883] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.296 [2024-05-15 01:09:15.595906] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.296 [2024-05-15 01:09:15.595921] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.296 [2024-05-15 01:09:15.599564] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.296 [2024-05-15 01:09:15.608570] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.296 [2024-05-15 01:09:15.609048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.296 [2024-05-15 01:09:15.609231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.296 [2024-05-15 01:09:15.609256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.296 [2024-05-15 01:09:15.609287] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.296 [2024-05-15 01:09:15.609530] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.296 [2024-05-15 01:09:15.609775] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.296 [2024-05-15 01:09:15.609798] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.296 [2024-05-15 01:09:15.609813] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.296 [2024-05-15 01:09:15.613454] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.296 [2024-05-15 01:09:15.622675] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.296 [2024-05-15 01:09:15.623178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.296 [2024-05-15 01:09:15.623357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.296 [2024-05-15 01:09:15.623383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.296 [2024-05-15 01:09:15.623398] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.296 [2024-05-15 01:09:15.623651] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.296 [2024-05-15 01:09:15.623897] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.296 [2024-05-15 01:09:15.623920] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.296 [2024-05-15 01:09:15.623945] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.296 [2024-05-15 01:09:15.627580] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.296 [2024-05-15 01:09:15.636594] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.296 [2024-05-15 01:09:15.637061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.296 [2024-05-15 01:09:15.637299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.296 [2024-05-15 01:09:15.637327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.296 [2024-05-15 01:09:15.637344] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.296 [2024-05-15 01:09:15.637586] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.296 [2024-05-15 01:09:15.637831] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.296 [2024-05-15 01:09:15.637854] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.296 [2024-05-15 01:09:15.637869] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.296 [2024-05-15 01:09:15.641509] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.296 [2024-05-15 01:09:15.650515] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.296 [2024-05-15 01:09:15.651006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.296 [2024-05-15 01:09:15.651244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.296 [2024-05-15 01:09:15.651272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.296 [2024-05-15 01:09:15.651289] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.296 [2024-05-15 01:09:15.651530] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.296 [2024-05-15 01:09:15.651776] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.296 [2024-05-15 01:09:15.651799] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.296 [2024-05-15 01:09:15.651813] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.296 [2024-05-15 01:09:15.655453] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.296 [2024-05-15 01:09:15.664460] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.296 [2024-05-15 01:09:15.664911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.296 [2024-05-15 01:09:15.665107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.296 [2024-05-15 01:09:15.665136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.296 [2024-05-15 01:09:15.665153] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.296 [2024-05-15 01:09:15.665395] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.296 [2024-05-15 01:09:15.665639] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.296 [2024-05-15 01:09:15.665662] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.296 [2024-05-15 01:09:15.665677] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.296 [2024-05-15 01:09:15.669322] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.296 [2024-05-15 01:09:15.678558] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.296 [2024-05-15 01:09:15.679029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.296 [2024-05-15 01:09:15.679316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.296 [2024-05-15 01:09:15.679344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.296 [2024-05-15 01:09:15.679361] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.296 [2024-05-15 01:09:15.679603] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.296 [2024-05-15 01:09:15.679848] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.296 [2024-05-15 01:09:15.679871] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.296 [2024-05-15 01:09:15.679885] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.296 [2024-05-15 01:09:15.683556] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.558 [2024-05-15 01:09:15.692621] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.558 [2024-05-15 01:09:15.693101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.558 [2024-05-15 01:09:15.693319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.558 [2024-05-15 01:09:15.693344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.558 [2024-05-15 01:09:15.693359] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.558 [2024-05-15 01:09:15.693616] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.558 [2024-05-15 01:09:15.693862] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.558 [2024-05-15 01:09:15.693885] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.558 [2024-05-15 01:09:15.693900] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.558 [2024-05-15 01:09:15.697541] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.558 [2024-05-15 01:09:15.706548] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.558 [2024-05-15 01:09:15.707065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.558 [2024-05-15 01:09:15.707277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.558 [2024-05-15 01:09:15.707304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.558 [2024-05-15 01:09:15.707327] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.558 [2024-05-15 01:09:15.707569] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.558 [2024-05-15 01:09:15.707825] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.558 [2024-05-15 01:09:15.707852] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.558 [2024-05-15 01:09:15.707868] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.558 [2024-05-15 01:09:15.711577] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.558 [2024-05-15 01:09:15.720636] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.558 [2024-05-15 01:09:15.721094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.558 [2024-05-15 01:09:15.721311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.558 [2024-05-15 01:09:15.721339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.558 [2024-05-15 01:09:15.721356] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.558 [2024-05-15 01:09:15.721598] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.558 [2024-05-15 01:09:15.721842] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.558 [2024-05-15 01:09:15.721865] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.558 [2024-05-15 01:09:15.721880] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.558 [2024-05-15 01:09:15.725518] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.558 [2024-05-15 01:09:15.734737] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.558 [2024-05-15 01:09:15.735203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.558 [2024-05-15 01:09:15.735444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.558 [2024-05-15 01:09:15.735471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.558 [2024-05-15 01:09:15.735488] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.558 [2024-05-15 01:09:15.735729] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.558 [2024-05-15 01:09:15.735986] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.558 [2024-05-15 01:09:15.736011] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.558 [2024-05-15 01:09:15.736026] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.558 [2024-05-15 01:09:15.739658] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.558 [2024-05-15 01:09:15.748667] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.558 [2024-05-15 01:09:15.749174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.558 [2024-05-15 01:09:15.749406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.558 [2024-05-15 01:09:15.749448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.558 [2024-05-15 01:09:15.749466] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.558 [2024-05-15 01:09:15.749713] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.558 [2024-05-15 01:09:15.749970] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.558 [2024-05-15 01:09:15.749994] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.558 [2024-05-15 01:09:15.750009] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.558 [2024-05-15 01:09:15.753642] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.558 [2024-05-15 01:09:15.762648] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.558 [2024-05-15 01:09:15.763296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.558 [2024-05-15 01:09:15.763589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.558 [2024-05-15 01:09:15.763616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.558 [2024-05-15 01:09:15.763632] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.558 [2024-05-15 01:09:15.763873] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.558 [2024-05-15 01:09:15.764129] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.558 [2024-05-15 01:09:15.764168] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.558 [2024-05-15 01:09:15.764184] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.558 [2024-05-15 01:09:15.767818] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.558 [2024-05-15 01:09:15.776619] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.558 [2024-05-15 01:09:15.777077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.558 [2024-05-15 01:09:15.777252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.558 [2024-05-15 01:09:15.777280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.558 [2024-05-15 01:09:15.777298] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.558 [2024-05-15 01:09:15.777540] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.558 [2024-05-15 01:09:15.777785] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.558 [2024-05-15 01:09:15.777808] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.558 [2024-05-15 01:09:15.777823] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.558 [2024-05-15 01:09:15.781466] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.558 [2024-05-15 01:09:15.790709] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.558 [2024-05-15 01:09:15.791163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.558 [2024-05-15 01:09:15.791457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.558 [2024-05-15 01:09:15.791508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.558 [2024-05-15 01:09:15.791525] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.558 [2024-05-15 01:09:15.791766] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.558 [2024-05-15 01:09:15.792027] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.558 [2024-05-15 01:09:15.792052] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.558 [2024-05-15 01:09:15.792067] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.558 [2024-05-15 01:09:15.795698] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.558 [2024-05-15 01:09:15.804706] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.558 [2024-05-15 01:09:15.805196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.559 [2024-05-15 01:09:15.805512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.559 [2024-05-15 01:09:15.805567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.559 [2024-05-15 01:09:15.805584] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.559 [2024-05-15 01:09:15.805825] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.559 [2024-05-15 01:09:15.806081] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.559 [2024-05-15 01:09:15.806105] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.559 [2024-05-15 01:09:15.806120] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.559 [2024-05-15 01:09:15.809753] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.559 [2024-05-15 01:09:15.818761] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.559 [2024-05-15 01:09:15.819242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.559 [2024-05-15 01:09:15.819454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.559 [2024-05-15 01:09:15.819482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.559 [2024-05-15 01:09:15.819499] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.559 [2024-05-15 01:09:15.819740] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.559 [2024-05-15 01:09:15.819996] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.559 [2024-05-15 01:09:15.820021] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.559 [2024-05-15 01:09:15.820035] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.559 [2024-05-15 01:09:15.823676] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.559 [2024-05-15 01:09:15.832682] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.559 [2024-05-15 01:09:15.833158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.559 [2024-05-15 01:09:15.833443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.559 [2024-05-15 01:09:15.833500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.559 [2024-05-15 01:09:15.833516] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.559 [2024-05-15 01:09:15.833757] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.559 [2024-05-15 01:09:15.834021] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.559 [2024-05-15 01:09:15.834046] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.559 [2024-05-15 01:09:15.834061] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.559 [2024-05-15 01:09:15.837691] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.559 [2024-05-15 01:09:15.846708] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.559 [2024-05-15 01:09:15.847187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.559 [2024-05-15 01:09:15.847468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.559 [2024-05-15 01:09:15.847492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.559 [2024-05-15 01:09:15.847507] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.559 [2024-05-15 01:09:15.847761] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.559 [2024-05-15 01:09:15.848017] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.559 [2024-05-15 01:09:15.848042] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.559 [2024-05-15 01:09:15.848057] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.559 [2024-05-15 01:09:15.851686] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.559 [2024-05-15 01:09:15.860688] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.559 [2024-05-15 01:09:15.861145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.559 [2024-05-15 01:09:15.861419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.559 [2024-05-15 01:09:15.861463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.559 [2024-05-15 01:09:15.861478] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.559 [2024-05-15 01:09:15.861734] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.559 [2024-05-15 01:09:15.861990] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.559 [2024-05-15 01:09:15.862013] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.559 [2024-05-15 01:09:15.862029] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.559 [2024-05-15 01:09:15.865663] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.559 [2024-05-15 01:09:15.874681] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.559 [2024-05-15 01:09:15.875154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.559 [2024-05-15 01:09:15.875424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.559 [2024-05-15 01:09:15.875453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.559 [2024-05-15 01:09:15.875470] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.559 [2024-05-15 01:09:15.875712] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.559 [2024-05-15 01:09:15.875968] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.559 [2024-05-15 01:09:15.875992] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.559 [2024-05-15 01:09:15.876013] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.559 [2024-05-15 01:09:15.879654] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.559 [2024-05-15 01:09:15.888682] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.559 [2024-05-15 01:09:15.889140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.559 [2024-05-15 01:09:15.889354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.559 [2024-05-15 01:09:15.889382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.559 [2024-05-15 01:09:15.889399] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.559 [2024-05-15 01:09:15.889640] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.559 [2024-05-15 01:09:15.889885] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.559 [2024-05-15 01:09:15.889908] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.559 [2024-05-15 01:09:15.889923] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.559 [2024-05-15 01:09:15.893569] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.559 [2024-05-15 01:09:15.902594] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.559 [2024-05-15 01:09:15.903076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.559 [2024-05-15 01:09:15.903304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.559 [2024-05-15 01:09:15.903329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.559 [2024-05-15 01:09:15.903359] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.559 [2024-05-15 01:09:15.903617] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.559 [2024-05-15 01:09:15.903862] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.559 [2024-05-15 01:09:15.903885] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.559 [2024-05-15 01:09:15.903900] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.559 [2024-05-15 01:09:15.907555] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.559 [2024-05-15 01:09:15.916573] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.559 [2024-05-15 01:09:15.917037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.559 [2024-05-15 01:09:15.917338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.559 [2024-05-15 01:09:15.917399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.559 [2024-05-15 01:09:15.917416] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.559 [2024-05-15 01:09:15.917658] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.559 [2024-05-15 01:09:15.917903] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.559 [2024-05-15 01:09:15.917927] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.559 [2024-05-15 01:09:15.917957] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.559 [2024-05-15 01:09:15.921590] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1342785 Killed "${NVMF_APP[@]}" "$@" 00:22:03.559 01:09:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:22:03.559 01:09:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:22:03.559 01:09:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:03.559 01:09:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:03.559 01:09:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:03.559 01:09:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1344098 00:22:03.559 01:09:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:03.559 01:09:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1344098 00:22:03.559 01:09:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 1344098 ']' 00:22:03.559 01:09:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.560 01:09:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:03.560 01:09:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
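At this point the shell trace shows why the reconnects were being refused: line 35 of test/nvmf/host/bdevperf.sh killed the previous nvmf_tgt process (pid 1342785), and tgt_init now restarts it through nvmfappstart -m 0xE inside the cvl_0_0_ns_spdk namespace, then waits for the new pid (1344098) to come up on the RPC socket /var/tmp/spdk.sock. A minimal, hedged sketch of that start-and-wait pattern follows; the binary path, socket path and polling loop are illustrative stand-ins, not the real tgt_init/nvmfappstart/waitforlisten helpers from the SPDK tree:

```bash
#!/usr/bin/env bash
# Hedged sketch of the restart-and-wait pattern visible in the trace above.
set -euo pipefail

NVMF_TGT=./build/bin/nvmf_tgt   # assumed location of the target binary
RPC_SOCK=/var/tmp/spdk.sock     # RPC socket the log waits on

# Start the target with core mask 0xE and remember its pid, mirroring
# "nvmf_tgt -i 0 -e 0xFFFF -m 0xE" and "nvmfpid=1344098" in the log.
"$NVMF_TGT" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# Poll until the app is listening on its UNIX-domain RPC socket.
for _ in $(seq 1 100); do
    if [[ -S "$RPC_SOCK" ]]; then
        echo "nvmf_tgt (pid $nvmfpid) is up; RPC socket $RPC_SOCK is ready"
        exit 0
    fi
    sleep 0.1
done
echo "timed out waiting for $RPC_SOCK" >&2
exit 1
```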
00:22:03.560 [2024-05-15 01:09:15.930439] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.560 01:09:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:03.560 01:09:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:03.560 [2024-05-15 01:09:15.930882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.560 [2024-05-15 01:09:15.931055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.560 [2024-05-15 01:09:15.931081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.560 [2024-05-15 01:09:15.931096] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.560 [2024-05-15 01:09:15.931341] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.560 [2024-05-15 01:09:15.931560] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.560 [2024-05-15 01:09:15.931582] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.560 [2024-05-15 01:09:15.931597] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.560 [2024-05-15 01:09:15.934764] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.560 [2024-05-15 01:09:15.944044] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.560 [2024-05-15 01:09:15.944533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.560 [2024-05-15 01:09:15.944728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.560 [2024-05-15 01:09:15.944754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.560 [2024-05-15 01:09:15.944769] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.560 [2024-05-15 01:09:15.945036] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.560 [2024-05-15 01:09:15.945258] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.560 [2024-05-15 01:09:15.945294] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.560 [2024-05-15 01:09:15.945312] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.820 [2024-05-15 01:09:15.948602] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.820 [2024-05-15 01:09:15.957553] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.820 [2024-05-15 01:09:15.957975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.820 [2024-05-15 01:09:15.958199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.820 [2024-05-15 01:09:15.958224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.820 [2024-05-15 01:09:15.958239] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.820 [2024-05-15 01:09:15.958495] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.820 [2024-05-15 01:09:15.958705] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.820 [2024-05-15 01:09:15.958725] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.820 [2024-05-15 01:09:15.958737] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.820 [2024-05-15 01:09:15.962121] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.820 [2024-05-15 01:09:15.970885] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.820 [2024-05-15 01:09:15.971414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.820 [2024-05-15 01:09:15.971619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.820 [2024-05-15 01:09:15.971644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.820 [2024-05-15 01:09:15.971660] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.820 [2024-05-15 01:09:15.971925] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.821 [2024-05-15 01:09:15.972164] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.821 [2024-05-15 01:09:15.972185] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.821 [2024-05-15 01:09:15.972199] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.821 [2024-05-15 01:09:15.973532] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:22:03.821 [2024-05-15 01:09:15.973605] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.821 [2024-05-15 01:09:15.975461] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.821 [2024-05-15 01:09:15.984488] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.821 [2024-05-15 01:09:15.985002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.821 [2024-05-15 01:09:15.985184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.821 [2024-05-15 01:09:15.985209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.821 [2024-05-15 01:09:15.985227] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.821 [2024-05-15 01:09:15.985463] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.821 [2024-05-15 01:09:15.985669] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.821 [2024-05-15 01:09:15.985689] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.821 [2024-05-15 01:09:15.985702] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.821 [2024-05-15 01:09:15.988784] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.821 [2024-05-15 01:09:15.997881] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.821 [2024-05-15 01:09:15.998315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.821 [2024-05-15 01:09:15.998483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.821 [2024-05-15 01:09:15.998509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.821 [2024-05-15 01:09:15.998524] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.821 [2024-05-15 01:09:15.998789] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.821 [2024-05-15 01:09:15.999016] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.821 [2024-05-15 01:09:15.999038] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.821 [2024-05-15 01:09:15.999052] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.821 [2024-05-15 01:09:16.002157] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.821 [2024-05-15 01:09:16.011370] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.821 [2024-05-15 01:09:16.011820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.821 [2024-05-15 01:09:16.011985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.821 [2024-05-15 01:09:16.012012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.821 [2024-05-15 01:09:16.012027] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.821 [2024-05-15 01:09:16.012285] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.821 [2024-05-15 01:09:16.012487] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.821 [2024-05-15 01:09:16.012506] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.821 [2024-05-15 01:09:16.012519] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.821 [2024-05-15 01:09:16.015615] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.821 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.821 [2024-05-15 01:09:16.025334] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.821 [2024-05-15 01:09:16.025767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.821 [2024-05-15 01:09:16.026000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.821 [2024-05-15 01:09:16.026026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.821 [2024-05-15 01:09:16.026042] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.821 [2024-05-15 01:09:16.026289] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.821 [2024-05-15 01:09:16.026535] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.821 [2024-05-15 01:09:16.026563] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.821 [2024-05-15 01:09:16.026580] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.821 [2024-05-15 01:09:16.030188] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.821 [2024-05-15 01:09:16.039192] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.821 [2024-05-15 01:09:16.039664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.821 [2024-05-15 01:09:16.039916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.821 [2024-05-15 01:09:16.039956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.821 [2024-05-15 01:09:16.039975] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.821 [2024-05-15 01:09:16.040209] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.821 [2024-05-15 01:09:16.040468] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.821 [2024-05-15 01:09:16.040491] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.821 [2024-05-15 01:09:16.040507] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.821 [2024-05-15 01:09:16.044157] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.821 [2024-05-15 01:09:16.053215] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.821 [2024-05-15 01:09:16.053682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.821 [2024-05-15 01:09:16.053899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.821 [2024-05-15 01:09:16.053925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.821 [2024-05-15 01:09:16.053949] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.821 [2024-05-15 01:09:16.054191] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.821 [2024-05-15 01:09:16.054451] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.821 [2024-05-15 01:09:16.054475] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.821 [2024-05-15 01:09:16.054491] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.821 [2024-05-15 01:09:16.055536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:03.821 [2024-05-15 01:09:16.058083] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.821 [2024-05-15 01:09:16.067223] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.821 [2024-05-15 01:09:16.067888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.821 [2024-05-15 01:09:16.068167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.821 [2024-05-15 01:09:16.068194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.821 [2024-05-15 01:09:16.068222] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.821 [2024-05-15 01:09:16.068487] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.821 [2024-05-15 01:09:16.068736] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.821 [2024-05-15 01:09:16.068774] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.821 [2024-05-15 01:09:16.068793] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.821 [2024-05-15 01:09:16.072398] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.822 [2024-05-15 01:09:16.081244] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.822 [2024-05-15 01:09:16.081737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.822 [2024-05-15 01:09:16.082013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.822 [2024-05-15 01:09:16.082040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.822 [2024-05-15 01:09:16.082055] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.822 [2024-05-15 01:09:16.082321] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.822 [2024-05-15 01:09:16.082568] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.822 [2024-05-15 01:09:16.082593] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.822 [2024-05-15 01:09:16.082608] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.822 [2024-05-15 01:09:16.086188] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.822 [2024-05-15 01:09:16.095246] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.822 [2024-05-15 01:09:16.095723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.822 [2024-05-15 01:09:16.095944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.822 [2024-05-15 01:09:16.095971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.822 [2024-05-15 01:09:16.095987] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.822 [2024-05-15 01:09:16.096221] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.822 [2024-05-15 01:09:16.096468] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.822 [2024-05-15 01:09:16.096492] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.822 [2024-05-15 01:09:16.096507] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.822 [2024-05-15 01:09:16.100125] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.822 [2024-05-15 01:09:16.109258] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.822 [2024-05-15 01:09:16.109747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.822 [2024-05-15 01:09:16.109938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.822 [2024-05-15 01:09:16.109981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.822 [2024-05-15 01:09:16.109997] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.822 [2024-05-15 01:09:16.110243] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.822 [2024-05-15 01:09:16.110499] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.822 [2024-05-15 01:09:16.110522] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.822 [2024-05-15 01:09:16.110549] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.822 [2024-05-15 01:09:16.114117] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.822 [2024-05-15 01:09:16.123199] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.822 [2024-05-15 01:09:16.123841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.822 [2024-05-15 01:09:16.124057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.822 [2024-05-15 01:09:16.124086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.822 [2024-05-15 01:09:16.124104] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.822 [2024-05-15 01:09:16.124366] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.822 [2024-05-15 01:09:16.124615] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.822 [2024-05-15 01:09:16.124640] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.822 [2024-05-15 01:09:16.124658] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.822 [2024-05-15 01:09:16.128246] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.822 [2024-05-15 01:09:16.137106] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.822 [2024-05-15 01:09:16.137588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.822 [2024-05-15 01:09:16.137811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.822 [2024-05-15 01:09:16.137837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.822 [2024-05-15 01:09:16.137852] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.822 [2024-05-15 01:09:16.138094] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.822 [2024-05-15 01:09:16.138344] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.822 [2024-05-15 01:09:16.138368] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.822 [2024-05-15 01:09:16.138383] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.822 [2024-05-15 01:09:16.142011] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:03.822 [2024-05-15 01:09:16.151079] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.822 [2024-05-15 01:09:16.151565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.822 [2024-05-15 01:09:16.151822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.822 [2024-05-15 01:09:16.151850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.822 [2024-05-15 01:09:16.151867] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.822 [2024-05-15 01:09:16.152118] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.822 [2024-05-15 01:09:16.152371] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.822 [2024-05-15 01:09:16.152395] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.822 [2024-05-15 01:09:16.152410] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.822 [2024-05-15 01:09:16.156074] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.822 [2024-05-15 01:09:16.165151] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.822 [2024-05-15 01:09:16.165662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.822 [2024-05-15 01:09:16.165853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.822 [2024-05-15 01:09:16.165881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.822 [2024-05-15 01:09:16.165898] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.822 [2024-05-15 01:09:16.166151] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.822 [2024-05-15 01:09:16.166422] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.822 [2024-05-15 01:09:16.166447] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.822 [2024-05-15 01:09:16.166462] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.822 [2024-05-15 01:09:16.170158] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.822 [2024-05-15 01:09:16.174009] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:03.822 [2024-05-15 01:09:16.174042] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:03.822 [2024-05-15 01:09:16.174057] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:03.822 [2024-05-15 01:09:16.174068] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:03.822 [2024-05-15 01:09:16.174079] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:03.822 [2024-05-15 01:09:16.174141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:03.822 [2024-05-15 01:09:16.174164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:03.822 [2024-05-15 01:09:16.174167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.822 [2024-05-15 01:09:16.178857] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.822 [2024-05-15 01:09:16.179367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.823 [2024-05-15 01:09:16.179564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.823 [2024-05-15 01:09:16.179592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.823 [2024-05-15 01:09:16.179609] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.823 [2024-05-15 01:09:16.179849] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.823 [2024-05-15 01:09:16.180095] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.823 [2024-05-15 01:09:16.180118] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.823 [2024-05-15 01:09:16.180133] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.823 [2024-05-15 01:09:16.183471] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:03.823 [2024-05-15 01:09:16.192442] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.823 [2024-05-15 01:09:16.193074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.823 [2024-05-15 01:09:16.193300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.823 [2024-05-15 01:09:16.193327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.823 [2024-05-15 01:09:16.193353] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.823 [2024-05-15 01:09:16.193594] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.823 [2024-05-15 01:09:16.193832] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.823 [2024-05-15 01:09:16.193863] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.823 [2024-05-15 01:09:16.193878] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.823 [2024-05-15 01:09:16.197419] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
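The restarted target comes up with the tracepoint mask it was launched with (-e 0xFFFF) and reports reactors on cores 1, 2 and 3, which is exactly what the -m 0xE core mask selects (0xE is binary 1110, so bits 1 through 3 are set). A quick way to check that mapping, assuming bc is available:

```bash
# 0xE -> 1110 in binary: bits 1, 2 and 3 set, i.e. reactors on cores 1-3
echo 'obase=2; ibase=16; E' | bc
```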
00:22:03.823 [2024-05-15 01:09:16.206102] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.823 [2024-05-15 01:09:16.206719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.823 [2024-05-15 01:09:16.206943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:03.823 [2024-05-15 01:09:16.206969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:03.823 [2024-05-15 01:09:16.206988] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:03.823 [2024-05-15 01:09:16.207213] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:03.823 [2024-05-15 01:09:16.207465] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:03.823 [2024-05-15 01:09:16.207487] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:03.823 [2024-05-15 01:09:16.207503] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:03.823 [2024-05-15 01:09:16.210908] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.084 [2024-05-15 01:09:16.220101] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.084 [2024-05-15 01:09:16.220641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.084 [2024-05-15 01:09:16.220850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.084 [2024-05-15 01:09:16.220876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.084 [2024-05-15 01:09:16.220894] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.084 [2024-05-15 01:09:16.221126] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.084 [2024-05-15 01:09:16.221377] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.084 [2024-05-15 01:09:16.221398] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.084 [2024-05-15 01:09:16.221413] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.084 [2024-05-15 01:09:16.224786] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.084 [2024-05-15 01:09:16.233735] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.084 [2024-05-15 01:09:16.234305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.084 [2024-05-15 01:09:16.234540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.085 [2024-05-15 01:09:16.234568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.085 [2024-05-15 01:09:16.234586] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.085 [2024-05-15 01:09:16.234833] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.085 [2024-05-15 01:09:16.235088] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.085 [2024-05-15 01:09:16.235111] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.085 [2024-05-15 01:09:16.235127] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.085 [2024-05-15 01:09:16.238474] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.085 [2024-05-15 01:09:16.247279] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.085 [2024-05-15 01:09:16.247876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.085 [2024-05-15 01:09:16.248060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.085 [2024-05-15 01:09:16.248094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.085 [2024-05-15 01:09:16.248112] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.085 [2024-05-15 01:09:16.248362] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.085 [2024-05-15 01:09:16.248591] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.085 [2024-05-15 01:09:16.248612] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.085 [2024-05-15 01:09:16.248628] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.085 [2024-05-15 01:09:16.251884] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.085 [2024-05-15 01:09:16.260899] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.085 [2024-05-15 01:09:16.261429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.085 [2024-05-15 01:09:16.261606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.085 [2024-05-15 01:09:16.261632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.085 [2024-05-15 01:09:16.261648] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.085 [2024-05-15 01:09:16.261883] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.085 [2024-05-15 01:09:16.262128] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.085 [2024-05-15 01:09:16.262151] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.085 [2024-05-15 01:09:16.262166] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.085 [2024-05-15 01:09:16.265470] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.085 [2024-05-15 01:09:16.274570] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.085 [2024-05-15 01:09:16.275029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.085 [2024-05-15 01:09:16.275219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.085 [2024-05-15 01:09:16.275245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.085 [2024-05-15 01:09:16.275269] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.085 [2024-05-15 01:09:16.275502] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.085 [2024-05-15 01:09:16.275726] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.085 [2024-05-15 01:09:16.275757] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.085 [2024-05-15 01:09:16.275770] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.085 [2024-05-15 01:09:16.279065] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.085 [2024-05-15 01:09:16.288243] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.085 [2024-05-15 01:09:16.288723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.085 [2024-05-15 01:09:16.288913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.085 [2024-05-15 01:09:16.288945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.085 [2024-05-15 01:09:16.288963] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.085 [2024-05-15 01:09:16.289187] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.085 [2024-05-15 01:09:16.289418] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.085 [2024-05-15 01:09:16.289439] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.085 [2024-05-15 01:09:16.289452] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.085 [2024-05-15 01:09:16.292706] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.085 [2024-05-15 01:09:16.301745] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.085 [2024-05-15 01:09:16.302211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.085 [2024-05-15 01:09:16.302440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.085 [2024-05-15 01:09:16.302464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.085 [2024-05-15 01:09:16.302480] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.085 [2024-05-15 01:09:16.302697] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.085 [2024-05-15 01:09:16.302951] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.085 [2024-05-15 01:09:16.302973] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.085 [2024-05-15 01:09:16.302987] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.085 [2024-05-15 01:09:16.306216] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.085 [2024-05-15 01:09:16.315401] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.085 [2024-05-15 01:09:16.315852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.085 [2024-05-15 01:09:16.316023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.085 [2024-05-15 01:09:16.316050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.085 [2024-05-15 01:09:16.316065] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.085 [2024-05-15 01:09:16.316297] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.085 [2024-05-15 01:09:16.316511] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.085 [2024-05-15 01:09:16.316537] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.085 [2024-05-15 01:09:16.316551] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.085 [2024-05-15 01:09:16.319816] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.085 [2024-05-15 01:09:16.328996] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.085 [2024-05-15 01:09:16.329457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.085 [2024-05-15 01:09:16.329624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.085 [2024-05-15 01:09:16.329649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.085 [2024-05-15 01:09:16.329664] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.085 [2024-05-15 01:09:16.329881] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.085 [2024-05-15 01:09:16.330142] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.085 [2024-05-15 01:09:16.330164] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.085 [2024-05-15 01:09:16.330177] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.085 [2024-05-15 01:09:16.333409] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.085 [2024-05-15 01:09:16.342656] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.085 [2024-05-15 01:09:16.343117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.085 [2024-05-15 01:09:16.343339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.085 [2024-05-15 01:09:16.343364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.085 [2024-05-15 01:09:16.343379] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.085 [2024-05-15 01:09:16.343609] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.085 [2024-05-15 01:09:16.343823] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.085 [2024-05-15 01:09:16.343844] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.085 [2024-05-15 01:09:16.343857] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.085 [2024-05-15 01:09:16.347207] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.085 [2024-05-15 01:09:16.356157] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.085 [2024-05-15 01:09:16.356589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.085 [2024-05-15 01:09:16.356807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.085 [2024-05-15 01:09:16.356832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.085 [2024-05-15 01:09:16.356848] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.085 [2024-05-15 01:09:16.357074] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.085 [2024-05-15 01:09:16.357309] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.085 [2024-05-15 01:09:16.357330] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.085 [2024-05-15 01:09:16.357348] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.085 [2024-05-15 01:09:16.360598] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.086 [2024-05-15 01:09:16.369686] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.086 [2024-05-15 01:09:16.370121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.086 [2024-05-15 01:09:16.370297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.086 [2024-05-15 01:09:16.370325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.086 [2024-05-15 01:09:16.370340] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.086 [2024-05-15 01:09:16.370570] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.086 [2024-05-15 01:09:16.370785] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.086 [2024-05-15 01:09:16.370805] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.086 [2024-05-15 01:09:16.370818] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.086 [2024-05-15 01:09:16.374048] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.086 [2024-05-15 01:09:16.383293] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.086 [2024-05-15 01:09:16.383745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.086 [2024-05-15 01:09:16.383928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.086 [2024-05-15 01:09:16.383960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.086 [2024-05-15 01:09:16.383975] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.086 [2024-05-15 01:09:16.384192] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.086 [2024-05-15 01:09:16.384424] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.086 [2024-05-15 01:09:16.384445] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.086 [2024-05-15 01:09:16.384458] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.086 [2024-05-15 01:09:16.387713] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.086 [2024-05-15 01:09:16.396938] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.086 [2024-05-15 01:09:16.397391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.086 [2024-05-15 01:09:16.397575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.086 [2024-05-15 01:09:16.397601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.086 [2024-05-15 01:09:16.397616] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.086 [2024-05-15 01:09:16.397833] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.086 [2024-05-15 01:09:16.398094] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.086 [2024-05-15 01:09:16.398116] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.086 [2024-05-15 01:09:16.398129] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.086 [2024-05-15 01:09:16.401406] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.086 [2024-05-15 01:09:16.410529] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.086 [2024-05-15 01:09:16.410938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.086 [2024-05-15 01:09:16.411106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.086 [2024-05-15 01:09:16.411131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.086 [2024-05-15 01:09:16.411147] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.086 [2024-05-15 01:09:16.411376] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.086 [2024-05-15 01:09:16.411590] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.086 [2024-05-15 01:09:16.411610] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.086 [2024-05-15 01:09:16.411623] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.086 [2024-05-15 01:09:16.414837] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.086 [2024-05-15 01:09:16.424189] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.086 [2024-05-15 01:09:16.424618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.086 [2024-05-15 01:09:16.424840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.086 [2024-05-15 01:09:16.424866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.086 [2024-05-15 01:09:16.424881] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.086 [2024-05-15 01:09:16.425106] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.086 [2024-05-15 01:09:16.425341] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.086 [2024-05-15 01:09:16.425361] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.086 [2024-05-15 01:09:16.425374] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.086 [2024-05-15 01:09:16.428611] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.086 [2024-05-15 01:09:16.437794] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.086 [2024-05-15 01:09:16.438217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.086 [2024-05-15 01:09:16.438401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.086 [2024-05-15 01:09:16.438426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.086 [2024-05-15 01:09:16.438441] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.086 [2024-05-15 01:09:16.438658] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.086 [2024-05-15 01:09:16.438889] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.086 [2024-05-15 01:09:16.438909] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.086 [2024-05-15 01:09:16.438946] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.086 [2024-05-15 01:09:16.442209] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.086 [2024-05-15 01:09:16.451381] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.086 [2024-05-15 01:09:16.451808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.086 [2024-05-15 01:09:16.451996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.086 [2024-05-15 01:09:16.452023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.086 [2024-05-15 01:09:16.452038] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.086 [2024-05-15 01:09:16.452255] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.086 [2024-05-15 01:09:16.452485] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.086 [2024-05-15 01:09:16.452506] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.086 [2024-05-15 01:09:16.452519] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.086 [2024-05-15 01:09:16.455760] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.086 [2024-05-15 01:09:16.464898] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.086 [2024-05-15 01:09:16.465326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.086 [2024-05-15 01:09:16.465521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.086 [2024-05-15 01:09:16.465546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.086 [2024-05-15 01:09:16.465561] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.086 [2024-05-15 01:09:16.465778] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.086 [2024-05-15 01:09:16.466009] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.086 [2024-05-15 01:09:16.466031] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.086 [2024-05-15 01:09:16.466044] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.086 [2024-05-15 01:09:16.469477] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.346 [2024-05-15 01:09:16.478631] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.346 [2024-05-15 01:09:16.479047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.346 [2024-05-15 01:09:16.479241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.346 [2024-05-15 01:09:16.479267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.346 [2024-05-15 01:09:16.479282] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.346 [2024-05-15 01:09:16.479504] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.346 [2024-05-15 01:09:16.479726] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.346 [2024-05-15 01:09:16.479747] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.346 [2024-05-15 01:09:16.479761] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.346 [2024-05-15 01:09:16.483191] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.346 [2024-05-15 01:09:16.492291] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.346 [2024-05-15 01:09:16.492712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.346 [2024-05-15 01:09:16.492905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.346 [2024-05-15 01:09:16.492940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.346 [2024-05-15 01:09:16.492959] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.347 [2024-05-15 01:09:16.493178] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.347 [2024-05-15 01:09:16.493408] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.347 [2024-05-15 01:09:16.493430] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.347 [2024-05-15 01:09:16.493444] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.347 [2024-05-15 01:09:16.496681] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.347 [2024-05-15 01:09:16.505859] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.347 [2024-05-15 01:09:16.506320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.347 [2024-05-15 01:09:16.506533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.347 [2024-05-15 01:09:16.506559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.347 [2024-05-15 01:09:16.506574] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.347 [2024-05-15 01:09:16.506791] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.347 [2024-05-15 01:09:16.507050] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.347 [2024-05-15 01:09:16.507072] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.347 [2024-05-15 01:09:16.507085] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.347 [2024-05-15 01:09:16.510341] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.347 [2024-05-15 01:09:16.519462] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.347 [2024-05-15 01:09:16.519902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.347 [2024-05-15 01:09:16.520101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.347 [2024-05-15 01:09:16.520128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.347 [2024-05-15 01:09:16.520143] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.347 [2024-05-15 01:09:16.520373] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.347 [2024-05-15 01:09:16.520587] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.347 [2024-05-15 01:09:16.520607] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.347 [2024-05-15 01:09:16.520620] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.347 [2024-05-15 01:09:16.523844] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.347 [2024-05-15 01:09:16.533031] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.347 [2024-05-15 01:09:16.533469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.347 [2024-05-15 01:09:16.533633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.347 [2024-05-15 01:09:16.533663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.347 [2024-05-15 01:09:16.533678] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.347 [2024-05-15 01:09:16.533908] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.347 [2024-05-15 01:09:16.534152] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.347 [2024-05-15 01:09:16.534174] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.347 [2024-05-15 01:09:16.534188] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.347 [2024-05-15 01:09:16.537486] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.347 [2024-05-15 01:09:16.546668] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.347 [2024-05-15 01:09:16.547127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.347 [2024-05-15 01:09:16.547321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.347 [2024-05-15 01:09:16.547346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.347 [2024-05-15 01:09:16.547361] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.347 [2024-05-15 01:09:16.547591] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.347 [2024-05-15 01:09:16.547805] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.347 [2024-05-15 01:09:16.547825] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.347 [2024-05-15 01:09:16.547838] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.347 [2024-05-15 01:09:16.551119] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.347 [2024-05-15 01:09:16.560296] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.347 [2024-05-15 01:09:16.560717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.347 [2024-05-15 01:09:16.560913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.347 [2024-05-15 01:09:16.560944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.347 [2024-05-15 01:09:16.560961] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.347 [2024-05-15 01:09:16.561178] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.347 [2024-05-15 01:09:16.561411] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.347 [2024-05-15 01:09:16.561432] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.347 [2024-05-15 01:09:16.561445] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.347 [2024-05-15 01:09:16.564660] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.347 [2024-05-15 01:09:16.573843] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.347 [2024-05-15 01:09:16.574256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.347 [2024-05-15 01:09:16.574440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.347 [2024-05-15 01:09:16.574465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.347 [2024-05-15 01:09:16.574485] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.347 [2024-05-15 01:09:16.574703] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.347 [2024-05-15 01:09:16.574958] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.347 [2024-05-15 01:09:16.574980] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.347 [2024-05-15 01:09:16.574993] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.347 [2024-05-15 01:09:16.578237] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.347 [2024-05-15 01:09:16.587345] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.347 [2024-05-15 01:09:16.587774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.347 [2024-05-15 01:09:16.587961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.347 [2024-05-15 01:09:16.587987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.347 [2024-05-15 01:09:16.588002] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.348 [2024-05-15 01:09:16.588220] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.348 [2024-05-15 01:09:16.588450] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.348 [2024-05-15 01:09:16.588471] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.348 [2024-05-15 01:09:16.588483] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.348 [2024-05-15 01:09:16.591738] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.348 [2024-05-15 01:09:16.600879] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.348 [2024-05-15 01:09:16.601296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.348 [2024-05-15 01:09:16.601459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.348 [2024-05-15 01:09:16.601484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.348 [2024-05-15 01:09:16.601499] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.348 [2024-05-15 01:09:16.601716] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.348 [2024-05-15 01:09:16.601971] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.348 [2024-05-15 01:09:16.601993] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.348 [2024-05-15 01:09:16.602006] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.348 [2024-05-15 01:09:16.605248] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.348 [2024-05-15 01:09:16.614400] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.348 [2024-05-15 01:09:16.614842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.348 [2024-05-15 01:09:16.615025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.348 [2024-05-15 01:09:16.615051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.348 [2024-05-15 01:09:16.615067] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.348 [2024-05-15 01:09:16.615304] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.348 [2024-05-15 01:09:16.615519] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.348 [2024-05-15 01:09:16.615539] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.348 [2024-05-15 01:09:16.615552] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.348 [2024-05-15 01:09:16.618792] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.348 [2024-05-15 01:09:16.628008] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.348 [2024-05-15 01:09:16.628454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.348 [2024-05-15 01:09:16.628618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.348 [2024-05-15 01:09:16.628643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.348 [2024-05-15 01:09:16.628659] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.348 [2024-05-15 01:09:16.628877] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.348 [2024-05-15 01:09:16.629137] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.348 [2024-05-15 01:09:16.629159] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.348 [2024-05-15 01:09:16.629173] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.348 [2024-05-15 01:09:16.632404] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.348 [2024-05-15 01:09:16.641629] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.348 [2024-05-15 01:09:16.642052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.348 [2024-05-15 01:09:16.642269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.348 [2024-05-15 01:09:16.642294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.348 [2024-05-15 01:09:16.642309] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.348 [2024-05-15 01:09:16.642527] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.348 [2024-05-15 01:09:16.642757] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.348 [2024-05-15 01:09:16.642777] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.348 [2024-05-15 01:09:16.642790] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.348 [2024-05-15 01:09:16.646052] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.348 [2024-05-15 01:09:16.655189] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.348 [2024-05-15 01:09:16.655638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.348 [2024-05-15 01:09:16.655826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.348 [2024-05-15 01:09:16.655851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.348 [2024-05-15 01:09:16.655866] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.348 [2024-05-15 01:09:16.656091] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.348 [2024-05-15 01:09:16.656329] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.348 [2024-05-15 01:09:16.656350] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.348 [2024-05-15 01:09:16.656363] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.348 [2024-05-15 01:09:16.659639] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.348 [2024-05-15 01:09:16.668817] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.348 [2024-05-15 01:09:16.669269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.348 [2024-05-15 01:09:16.669460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.348 [2024-05-15 01:09:16.669485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.348 [2024-05-15 01:09:16.669500] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.348 [2024-05-15 01:09:16.669718] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.348 [2024-05-15 01:09:16.669974] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.348 [2024-05-15 01:09:16.669996] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.348 [2024-05-15 01:09:16.670009] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.348 [2024-05-15 01:09:16.673275] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.348 [2024-05-15 01:09:16.682588] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.348 [2024-05-15 01:09:16.683011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.348 [2024-05-15 01:09:16.683205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.348 [2024-05-15 01:09:16.683230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.348 [2024-05-15 01:09:16.683245] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.349 [2024-05-15 01:09:16.683464] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.349 [2024-05-15 01:09:16.683693] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.349 [2024-05-15 01:09:16.683713] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.349 [2024-05-15 01:09:16.683726] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.349 [2024-05-15 01:09:16.687011] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.349 [2024-05-15 01:09:16.696188] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.349 [2024-05-15 01:09:16.696645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.349 [2024-05-15 01:09:16.696847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.349 [2024-05-15 01:09:16.696872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.349 [2024-05-15 01:09:16.696887] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.349 [2024-05-15 01:09:16.697114] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.349 [2024-05-15 01:09:16.697347] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.349 [2024-05-15 01:09:16.697373] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.349 [2024-05-15 01:09:16.697387] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.349 [2024-05-15 01:09:16.700612] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.349 [2024-05-15 01:09:16.709750] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.349 [2024-05-15 01:09:16.710182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.349 [2024-05-15 01:09:16.710344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.349 [2024-05-15 01:09:16.710369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.349 [2024-05-15 01:09:16.710383] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.349 [2024-05-15 01:09:16.710613] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.349 [2024-05-15 01:09:16.710827] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.349 [2024-05-15 01:09:16.710847] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.349 [2024-05-15 01:09:16.710860] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.349 [2024-05-15 01:09:16.714115] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.349 [2024-05-15 01:09:16.723259] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.349 [2024-05-15 01:09:16.723688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.349 [2024-05-15 01:09:16.723858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.349 [2024-05-15 01:09:16.723883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.349 [2024-05-15 01:09:16.723898] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.349 [2024-05-15 01:09:16.724124] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.349 [2024-05-15 01:09:16.724354] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.349 [2024-05-15 01:09:16.724376] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.349 [2024-05-15 01:09:16.724389] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.349 [2024-05-15 01:09:16.727851] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.349 [2024-05-15 01:09:16.736938] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.349 [2024-05-15 01:09:16.737371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.349 [2024-05-15 01:09:16.737533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.349 [2024-05-15 01:09:16.737558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.349 [2024-05-15 01:09:16.737573] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.349 [2024-05-15 01:09:16.737791] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.349 [2024-05-15 01:09:16.738050] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.349 [2024-05-15 01:09:16.738072] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.349 [2024-05-15 01:09:16.738091] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.609 [2024-05-15 01:09:16.741423] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.609 [2024-05-15 01:09:16.750621] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.609 [2024-05-15 01:09:16.751021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.609 [2024-05-15 01:09:16.751194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.609 [2024-05-15 01:09:16.751221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.609 [2024-05-15 01:09:16.751237] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.609 [2024-05-15 01:09:16.751470] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.609 [2024-05-15 01:09:16.751685] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.609 [2024-05-15 01:09:16.751705] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.609 [2024-05-15 01:09:16.751718] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.609 [2024-05-15 01:09:16.755001] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.609 [2024-05-15 01:09:16.764123] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.609 [2024-05-15 01:09:16.764557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.609 [2024-05-15 01:09:16.764748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.609 [2024-05-15 01:09:16.764773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.609 [2024-05-15 01:09:16.764788] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.609 [2024-05-15 01:09:16.765013] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.609 [2024-05-15 01:09:16.765249] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.609 [2024-05-15 01:09:16.765270] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.609 [2024-05-15 01:09:16.765283] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.609 [2024-05-15 01:09:16.768575] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.609 [2024-05-15 01:09:16.777736] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.609 [2024-05-15 01:09:16.778192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.609 [2024-05-15 01:09:16.778358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.609 [2024-05-15 01:09:16.778383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.609 [2024-05-15 01:09:16.778399] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.609 [2024-05-15 01:09:16.778616] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.609 [2024-05-15 01:09:16.778846] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.609 [2024-05-15 01:09:16.778866] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.609 [2024-05-15 01:09:16.778879] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.609 [2024-05-15 01:09:16.782226] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.609 [2024-05-15 01:09:16.791276] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.609 [2024-05-15 01:09:16.791719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.609 [2024-05-15 01:09:16.791903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.609 [2024-05-15 01:09:16.791936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.609 [2024-05-15 01:09:16.791954] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.609 [2024-05-15 01:09:16.792171] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.609 [2024-05-15 01:09:16.792402] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.609 [2024-05-15 01:09:16.792423] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.609 [2024-05-15 01:09:16.792436] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.609 [2024-05-15 01:09:16.795770] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.609 [2024-05-15 01:09:16.804928] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.609 [2024-05-15 01:09:16.805385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.609 [2024-05-15 01:09:16.805575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.609 [2024-05-15 01:09:16.805600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.609 [2024-05-15 01:09:16.805616] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.609 [2024-05-15 01:09:16.805832] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.609 [2024-05-15 01:09:16.806093] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.609 [2024-05-15 01:09:16.806116] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.609 [2024-05-15 01:09:16.806129] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.609 [2024-05-15 01:09:16.809480] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.609 [2024-05-15 01:09:16.818536] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.609 [2024-05-15 01:09:16.818984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.609 [2024-05-15 01:09:16.819173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.609 [2024-05-15 01:09:16.819198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.609 [2024-05-15 01:09:16.819213] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.609 [2024-05-15 01:09:16.819430] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.609 [2024-05-15 01:09:16.819661] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.609 [2024-05-15 01:09:16.819681] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.609 [2024-05-15 01:09:16.819695] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.609 [2024-05-15 01:09:16.822971] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.609 [2024-05-15 01:09:16.832079] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.609 [2024-05-15 01:09:16.832526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.609 [2024-05-15 01:09:16.832684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.609 [2024-05-15 01:09:16.832709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.609 [2024-05-15 01:09:16.832725] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.609 [2024-05-15 01:09:16.832951] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.610 [2024-05-15 01:09:16.833174] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.610 [2024-05-15 01:09:16.833195] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.610 [2024-05-15 01:09:16.833208] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.610 [2024-05-15 01:09:16.836536] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.610 [2024-05-15 01:09:16.845655] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.610 [2024-05-15 01:09:16.846095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.610 [2024-05-15 01:09:16.846278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.610 [2024-05-15 01:09:16.846303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.610 [2024-05-15 01:09:16.846319] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.610 [2024-05-15 01:09:16.846549] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.610 [2024-05-15 01:09:16.846763] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.610 [2024-05-15 01:09:16.846783] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.610 [2024-05-15 01:09:16.846796] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.610 [2024-05-15 01:09:16.850078] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.610 [2024-05-15 01:09:16.859202] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.610 [2024-05-15 01:09:16.859610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.610 [2024-05-15 01:09:16.859823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.610 [2024-05-15 01:09:16.859848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.610 [2024-05-15 01:09:16.859863] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.610 [2024-05-15 01:09:16.860090] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.610 [2024-05-15 01:09:16.860325] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.610 [2024-05-15 01:09:16.860346] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.610 [2024-05-15 01:09:16.860359] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.610 [2024-05-15 01:09:16.863570] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.610 [2024-05-15 01:09:16.872715] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.610 [2024-05-15 01:09:16.873153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.610 [2024-05-15 01:09:16.873346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.610 [2024-05-15 01:09:16.873372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.610 [2024-05-15 01:09:16.873388] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.610 [2024-05-15 01:09:16.873604] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.610 [2024-05-15 01:09:16.873835] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.610 [2024-05-15 01:09:16.873856] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.610 [2024-05-15 01:09:16.873868] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.610 [2024-05-15 01:09:16.877146] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.610 [2024-05-15 01:09:16.886299] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.610 [2024-05-15 01:09:16.886723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.610 [2024-05-15 01:09:16.886885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.610 [2024-05-15 01:09:16.886910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.610 [2024-05-15 01:09:16.886926] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.610 [2024-05-15 01:09:16.887152] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.610 [2024-05-15 01:09:16.887385] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.610 [2024-05-15 01:09:16.887406] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.610 [2024-05-15 01:09:16.887420] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.610 [2024-05-15 01:09:16.890643] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.610 [2024-05-15 01:09:16.899791] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.610 [2024-05-15 01:09:16.900262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.610 [2024-05-15 01:09:16.900452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.610 [2024-05-15 01:09:16.900477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.610 [2024-05-15 01:09:16.900493] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.610 [2024-05-15 01:09:16.900723] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.610 [2024-05-15 01:09:16.900963] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.610 [2024-05-15 01:09:16.900985] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.610 [2024-05-15 01:09:16.900999] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.610 [2024-05-15 01:09:16.904260] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.610 [2024-05-15 01:09:16.913420] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.610 [2024-05-15 01:09:16.913864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.610 [2024-05-15 01:09:16.914051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.610 [2024-05-15 01:09:16.914084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.610 [2024-05-15 01:09:16.914101] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.610 [2024-05-15 01:09:16.914333] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.610 [2024-05-15 01:09:16.914548] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.610 [2024-05-15 01:09:16.914569] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.610 [2024-05-15 01:09:16.914582] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.610 [2024-05-15 01:09:16.917803] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.610 [2024-05-15 01:09:16.926982] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.610 [2024-05-15 01:09:16.927438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.610 [2024-05-15 01:09:16.927626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.610 [2024-05-15 01:09:16.927652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.610 [2024-05-15 01:09:16.927667] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.610 [2024-05-15 01:09:16.927896] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.610 [2024-05-15 01:09:16.928142] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.610 [2024-05-15 01:09:16.928164] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.610 [2024-05-15 01:09:16.928178] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.610 [2024-05-15 01:09:16.931407] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.610 [2024-05-15 01:09:16.940617] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.610 [2024-05-15 01:09:16.941037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.610 [2024-05-15 01:09:16.941225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.610 [2024-05-15 01:09:16.941250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.610 [2024-05-15 01:09:16.941265] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.610 [2024-05-15 01:09:16.941482] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.610 [2024-05-15 01:09:16.941703] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.610 [2024-05-15 01:09:16.941724] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.610 [2024-05-15 01:09:16.941737] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.610 [2024-05-15 01:09:16.945044] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.610 01:09:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:04.611 01:09:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:22:04.611 01:09:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:04.611 01:09:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:04.611 01:09:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:04.611 [2024-05-15 01:09:16.954211] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.611 [2024-05-15 01:09:16.954636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.611 [2024-05-15 01:09:16.954799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.611 [2024-05-15 01:09:16.954824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.611 [2024-05-15 01:09:16.954839] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.611 [2024-05-15 01:09:16.955065] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.611 [2024-05-15 01:09:16.955287] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.611 [2024-05-15 01:09:16.955322] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.611 [2024-05-15 01:09:16.955335] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.611 [2024-05-15 01:09:16.958608] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
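The (( i == 0 )) / return 0 pair in the trace above is the tail of the harness's wait loop: it polls the freshly started nvmf_tgt until its RPC socket answers, and only then lets timing_exit start_nvmf_tgt run and the configuration RPCs begin. A minimal stand-alone version of that poll, with the socket path, retry budget and the rpc_get_methods probe chosen here for illustration rather than copied from autotest_common.sh:

    rpc_sock=/var/tmp/spdk.sock
    for ((i = 100; i > 0; i--)); do
        # rpc_get_methods is a cheap RPC that succeeds as soon as the app is listening
        if scripts/rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null; then
            break
        fi
        sleep 0.1
    done
    (( i == 0 )) && { echo "nvmf_tgt did not come up" >&2; exit 1; }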
00:22:04.611 [2024-05-15 01:09:16.967905] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.611 [2024-05-15 01:09:16.968311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.611 [2024-05-15 01:09:16.968524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.611 [2024-05-15 01:09:16.968550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.611 [2024-05-15 01:09:16.968565] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.611 [2024-05-15 01:09:16.968782] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.611 [2024-05-15 01:09:16.969019] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.611 [2024-05-15 01:09:16.969041] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.611 [2024-05-15 01:09:16.969055] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.611 [2024-05-15 01:09:16.972397] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.611 01:09:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:04.611 01:09:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:04.611 01:09:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.611 01:09:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:04.611 [2024-05-15 01:09:16.981458] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.611 [2024-05-15 01:09:16.981915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.611 [2024-05-15 01:09:16.982097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.611 [2024-05-15 01:09:16.982124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.611 [2024-05-15 01:09:16.982142] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.611 [2024-05-15 01:09:16.982170] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:04.611 [2024-05-15 01:09:16.982359] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.611 [2024-05-15 01:09:16.982585] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.611 [2024-05-15 01:09:16.982608] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.611 [2024-05-15 01:09:16.982633] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.611 [2024-05-15 01:09:16.986070] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:04.611 01:09:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.611 01:09:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:04.611 01:09:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.611 01:09:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:04.611 [2024-05-15 01:09:16.995023] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.611 [2024-05-15 01:09:16.995502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.611 [2024-05-15 01:09:16.995703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.611 [2024-05-15 01:09:16.995729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.611 [2024-05-15 01:09:16.995745] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.611 [2024-05-15 01:09:16.996004] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.611 [2024-05-15 01:09:16.996226] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.611 [2024-05-15 01:09:16.996261] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.611 [2024-05-15 01:09:16.996274] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.611 [2024-05-15 01:09:16.999631] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.869 [2024-05-15 01:09:17.008693] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.869 [2024-05-15 01:09:17.009173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.869 [2024-05-15 01:09:17.009361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.869 [2024-05-15 01:09:17.009386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.869 [2024-05-15 01:09:17.009402] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.869 [2024-05-15 01:09:17.009620] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.869 [2024-05-15 01:09:17.009852] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.869 [2024-05-15 01:09:17.009873] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.870 [2024-05-15 01:09:17.009887] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.870 [2024-05-15 01:09:17.013183] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
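Filtering out the xtrace noise, host/bdevperf.sh is standing the target up in the usual order: TCP transport, a 64 MB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1, a namespace backed by that bdev, and finally the 10.0.0.2:4420 listener (the listener call appears a few lines further down, just before the "Target Listening" notice). Outside the harness the same sequence would normally be driven through scripts/rpc.py; the arguments below are the ones the trace shows:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MB bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420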
00:22:04.870 [2024-05-15 01:09:17.022216] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.870 [2024-05-15 01:09:17.022830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.870 [2024-05-15 01:09:17.023041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.870 [2024-05-15 01:09:17.023068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.870 [2024-05-15 01:09:17.023086] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.870 [2024-05-15 01:09:17.023326] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.870 [2024-05-15 01:09:17.023555] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.870 [2024-05-15 01:09:17.023576] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.870 [2024-05-15 01:09:17.023591] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.870 Malloc0 00:22:04.870 01:09:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.870 01:09:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:04.870 [2024-05-15 01:09:17.026869] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.870 01:09:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.870 01:09:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:04.870 01:09:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.870 01:09:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:04.870 01:09:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.870 01:09:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:04.870 [2024-05-15 01:09:17.036022] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.870 [2024-05-15 01:09:17.036443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.870 [2024-05-15 01:09:17.036649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.870 [2024-05-15 01:09:17.036675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f9990 with addr=10.0.0.2, port=4420 00:22:04.870 [2024-05-15 01:09:17.036690] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f9990 is same with the state(5) to be set 00:22:04.870 [2024-05-15 01:09:17.036945] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f9990 (9): Bad file descriptor 00:22:04.870 [2024-05-15 01:09:17.037168] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:04.870 [2024-05-15 01:09:17.037189] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:04.870 [2024-05-15 01:09:17.037203] 
nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.870 [2024-05-15 01:09:17.040522] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:04.870 01:09:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.870 01:09:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:04.870 01:09:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.870 01:09:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:04.870 [2024-05-15 01:09:17.045829] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:04.870 [2024-05-15 01:09:17.046122] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:04.870 [2024-05-15 01:09:17.049581] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.870 01:09:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.870 01:09:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1343427 00:22:04.870 [2024-05-15 01:09:17.085196] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:14.854 00:22:14.854 Latency(us) 00:22:14.854 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:14.854 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:14.854 Verification LBA range: start 0x0 length 0x4000 00:22:14.854 Nvme1n1 : 15.01 6459.48 25.23 10339.21 0.00 7594.57 837.40 21748.24 00:22:14.855 =================================================================================================================== 00:22:14.855 Total : 6459.48 25.23 10339.21 0.00 7594.57 837.40 21748.24 00:22:14.855 01:09:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:22:14.855 01:09:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:14.855 01:09:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.855 01:09:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:14.855 01:09:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.855 01:09:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:22:14.855 01:09:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:22:14.855 01:09:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:14.855 01:09:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:22:14.855 01:09:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:14.855 01:09:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:22:14.855 01:09:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:14.855 01:09:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:14.855 rmmod nvme_tcp 00:22:14.855 rmmod nvme_fabrics 00:22:14.855 rmmod nvme_keyring 00:22:14.855 01:09:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:14.855 01:09:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:22:14.855 01:09:25 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@125 -- # return 0 00:22:14.855 01:09:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1344098 ']' 00:22:14.855 01:09:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1344098 00:22:14.855 01:09:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 1344098 ']' 00:22:14.855 01:09:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 1344098 00:22:14.855 01:09:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname 00:22:14.855 01:09:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:14.855 01:09:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1344098 00:22:14.855 01:09:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:14.855 01:09:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:14.855 01:09:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1344098' 00:22:14.855 killing process with pid 1344098 00:22:14.855 01:09:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 1344098 00:22:14.855 [2024-05-15 01:09:25.918306] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:14.855 01:09:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@970 -- # wait 1344098 00:22:14.855 01:09:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:14.855 01:09:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:14.855 01:09:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:14.855 01:09:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:14.855 01:09:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:14.855 01:09:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.855 01:09:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:14.855 01:09:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.233 01:09:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:16.233 00:22:16.233 real 0m23.314s 00:22:16.233 user 1m1.206s 00:22:16.233 sys 0m4.802s 00:22:16.233 01:09:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:16.233 01:09:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:16.233 ************************************ 00:22:16.233 END TEST nvmf_bdevperf 00:22:16.233 ************************************ 00:22:16.233 01:09:28 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:22:16.233 01:09:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:16.233 01:09:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:16.233 01:09:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:16.233 ************************************ 00:22:16.233 START TEST nvmf_target_disconnect 00:22:16.233 ************************************ 00:22:16.233 01:09:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:22:16.233 * Looking for test storage... 00:22:16.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:16.233 01:09:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:16.233 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:22:16.233 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestinit 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:22:16.234 01:09:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:18.772 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:18.772 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.772 01:09:30 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:18.772 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:18.772 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:18.772 01:09:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:18.772 01:09:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:22:18.772 01:09:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:18.772 01:09:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:18.772 01:09:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:18.772 01:09:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:18.773 01:09:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:18.773 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:18.773 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:22:18.773 00:22:18.773 --- 10.0.0.2 ping statistics --- 00:22:18.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.773 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:22:18.773 01:09:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:18.773 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:18.773 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:22:18.773 00:22:18.773 --- 10.0.0.1 ping statistics --- 00:22:18.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.773 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:22:18.773 01:09:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:18.773 01:09:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:22:18.773 01:09:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:18.773 01:09:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:18.773 01:09:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:18.773 01:09:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:18.773 01:09:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:18.773 01:09:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:18.773 01:09:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:18.773 01:09:31 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:22:18.773 01:09:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:22:18.773 01:09:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:18.773 01:09:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:22:18.773 ************************************ 00:22:18.773 START TEST nvmf_target_disconnect_tc1 00:22:18.773 ************************************ 00:22:18.773 01:09:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:22:18.773 01:09:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # set +e 00:22:18.773 01:09:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:19.031 EAL: No 
free 2048 kB hugepages reported on node 1 00:22:19.031 [2024-05-15 01:09:31.225820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:19.031 [2024-05-15 01:09:31.226106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:19.031 [2024-05-15 01:09:31.226138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75ad60 with addr=10.0.0.2, port=4420 00:22:19.031 [2024-05-15 01:09:31.226171] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:19.031 [2024-05-15 01:09:31.226191] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:19.031 [2024-05-15 01:09:31.226204] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:22:19.031 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:22:19.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:22:19.031 Initializing NVMe Controllers 00:22:19.031 01:09:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # trap - ERR 00:22:19.031 01:09:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # print_backtrace 00:22:19.031 01:09:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:22:19.031 01:09:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1149 -- # return 0 00:22:19.031 01:09:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:22:19.031 01:09:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@41 -- # set -e 00:22:19.031 00:22:19.031 real 0m0.108s 00:22:19.031 user 0m0.040s 00:22:19.032 sys 0m0.067s 00:22:19.032 01:09:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:19.032 01:09:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:19.032 ************************************ 00:22:19.032 END TEST nvmf_target_disconnect_tc1 00:22:19.032 ************************************ 00:22:19.032 01:09:31 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:22:19.032 01:09:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:22:19.032 01:09:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:19.032 01:09:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:22:19.032 ************************************ 00:22:19.032 START TEST nvmf_target_disconnect_tc2 00:22:19.032 ************************************ 00:22:19.032 01:09:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:22:19.032 01:09:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:22:19.032 01:09:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:22:19.032 01:09:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:19.032 01:09:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:22:19.032 01:09:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:19.032 01:09:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1347656 00:22:19.032 01:09:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:22:19.032 01:09:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1347656 00:22:19.032 01:09:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1347656 ']' 00:22:19.032 01:09:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.032 01:09:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:19.032 01:09:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.032 01:09:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:19.032 01:09:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:19.032 [2024-05-15 01:09:31.347651] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:22:19.032 [2024-05-15 01:09:31.347738] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:19.032 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.290 [2024-05-15 01:09:31.425403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:19.290 [2024-05-15 01:09:31.538629] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:19.290 [2024-05-15 01:09:31.538685] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:19.290 [2024-05-15 01:09:31.538712] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:19.290 [2024-05-15 01:09:31.538723] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:19.290 [2024-05-15 01:09:31.538732] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
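(For orientation only, not part of the captured console output: the bring-up that the trace above walks through reduces to the sequence below. Commands, addresses and the nvmf_tgt invocation are taken from the trace itself; the xtrace prefixes are stripped, so read it as a condensed restatement rather than a reproducible script.)

    # nvmf_tcp_init as exercised above: put one of the two cvl_0_* netdevs discovered
    # earlier into a network namespace so target and initiator talk over real TCP.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side netdev lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator IP (host side)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP (namespace side)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                             # host -> namespace reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> host reachability
    modprobe nvme-tcp

    # nvmfappstart for tc2: the target runs inside the namespace (PID 1347656 above).
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &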
00:22:19.290 [2024-05-15 01:09:31.539181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:22:19.290 [2024-05-15 01:09:31.539233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:22:19.290 [2024-05-15 01:09:31.539358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:22:19.290 [2024-05-15 01:09:31.539485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:20.223 01:09:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:20.224 01:09:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:22:20.224 01:09:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:20.224 01:09:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:20.224 01:09:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:20.224 01:09:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:20.224 01:09:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:20.224 01:09:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.224 01:09:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:20.224 Malloc0 00:22:20.224 01:09:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.224 01:09:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:20.224 01:09:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.224 01:09:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:20.224 [2024-05-15 01:09:32.344249] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:20.224 01:09:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.224 01:09:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:20.224 01:09:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.224 01:09:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:20.224 01:09:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.224 01:09:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:20.224 01:09:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.224 01:09:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:20.224 01:09:32 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.224 01:09:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:20.224 01:09:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.224 01:09:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:20.224 [2024-05-15 01:09:32.372246] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:20.224 [2024-05-15 01:09:32.372541] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:20.224 01:09:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.224 01:09:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:20.224 01:09:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.224 01:09:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:20.224 01:09:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.224 01:09:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # reconnectpid=1347816 00:22:20.224 01:09:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:20.224 01:09:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@52 -- # sleep 2 00:22:20.224 EAL: No free 2048 kB hugepages reported on node 1 00:22:22.129 01:09:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@53 -- # kill -9 1347656 00:22:22.129 01:09:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@55 -- # sleep 2 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting 
I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Write completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Write completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Write completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Write completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Write completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Write completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Write completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Write completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Write completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Write completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 [2024-05-15 01:09:34.397470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Write completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Write completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Write completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Write completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Write completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 
00:22:22.129 Write completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Write completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Write completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Write completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Write completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Write completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Write completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Write completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 [2024-05-15 01:09:34.397802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Write completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Write completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Write completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Write completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Write completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Write completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Write completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.129 Read completed with error (sct=0, sc=8) 00:22:22.129 starting I/O failed 00:22:22.130 Write completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Read completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 
Write completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Read completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Read completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Read completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Read completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Read completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 [2024-05-15 01:09:34.398159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:22.130 Read completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Read completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Read completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Read completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Read completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Read completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Read completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Read completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Read completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Read completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Read completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Read completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Write completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Read completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Write completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Read completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Read completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Write completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Write completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Write completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Write completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Read completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Read completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Read completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Write completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Write completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Write completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Read completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Read completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Read completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Write completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 Write completed with error (sct=0, sc=8) 00:22:22.130 starting I/O failed 00:22:22.130 [2024-05-15 01:09:34.398507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ 
transport error -6 (No such device or address) on qpair id 1 00:22:22.130 [2024-05-15 01:09:34.398773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.399025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.399053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.130 qpair failed and we were unable to recover it. 00:22:22.130 [2024-05-15 01:09:34.399234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.399446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.399471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.130 qpair failed and we were unable to recover it. 00:22:22.130 [2024-05-15 01:09:34.399670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.399962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.400006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.130 qpair failed and we were unable to recover it. 00:22:22.130 [2024-05-15 01:09:34.400167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.400400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.400424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.130 qpair failed and we were unable to recover it. 00:22:22.130 [2024-05-15 01:09:34.400639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.400948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.400997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.130 qpair failed and we were unable to recover it. 00:22:22.130 [2024-05-15 01:09:34.401167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.401349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.401374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.130 qpair failed and we were unable to recover it. 00:22:22.130 [2024-05-15 01:09:34.401590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.401804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.401850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.130 qpair failed and we were unable to recover it. 
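(Again for orientation rather than captured output: the connect() failures with errno 111, i.e. ECONNREFUSED, that begin above are the expected outcome of the tc2 sequence recorded earlier in the trace, condensed here with the same RPC names and arguments the harness issued.)

    rpc_cmd bdev_malloc_create 64 512 -b Malloc0                      # bdev used as the subsystem namespace
    rpc_cmd nvmf_create_transport -t tcp -o
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # run the reconnect example against that listener (PID 1347816 above) ...
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 \
        -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    sleep 2
    kill -9 1347656    # SIGKILL the nvmf_tgt: in-flight I/O fails with CQ transport errors, and
    sleep 2            # every later reconnect attempt hits a closed port, hence the errno 111 entries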
00:22:22.130 [2024-05-15 01:09:34.402078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.402245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.402269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.130 qpair failed and we were unable to recover it. 00:22:22.130 [2024-05-15 01:09:34.402466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.402683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.402709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.130 qpair failed and we were unable to recover it. 00:22:22.130 [2024-05-15 01:09:34.402911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.403140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.403165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.130 qpair failed and we were unable to recover it. 00:22:22.130 [2024-05-15 01:09:34.403503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.403722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.403750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.130 qpair failed and we were unable to recover it. 00:22:22.130 [2024-05-15 01:09:34.403964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.404128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.404153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.130 qpair failed and we were unable to recover it. 00:22:22.130 [2024-05-15 01:09:34.404371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.404589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.404617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.130 qpair failed and we were unable to recover it. 00:22:22.130 [2024-05-15 01:09:34.404842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.405061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.405085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.130 qpair failed and we were unable to recover it. 
00:22:22.130 [2024-05-15 01:09:34.405275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.405460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.405483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.130 qpair failed and we were unable to recover it. 00:22:22.130 [2024-05-15 01:09:34.405685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.405947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.405973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.130 qpair failed and we were unable to recover it. 00:22:22.130 [2024-05-15 01:09:34.406145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.406368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.406395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.130 qpair failed and we were unable to recover it. 00:22:22.130 [2024-05-15 01:09:34.406612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.406830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.406858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.130 qpair failed and we were unable to recover it. 00:22:22.130 [2024-05-15 01:09:34.407080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.407254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.130 [2024-05-15 01:09:34.407278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.130 qpair failed and we were unable to recover it. 00:22:22.130 [2024-05-15 01:09:34.407460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.407694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.407720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.131 qpair failed and we were unable to recover it. 00:22:22.131 [2024-05-15 01:09:34.407949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.408130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.408155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.131 qpair failed and we were unable to recover it. 
00:22:22.131 [2024-05-15 01:09:34.408371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.408624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.408650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.131 qpair failed and we were unable to recover it. 00:22:22.131 [2024-05-15 01:09:34.408842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.409056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.409081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.131 qpair failed and we were unable to recover it. 00:22:22.131 [2024-05-15 01:09:34.409237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.409469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.409509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.131 qpair failed and we were unable to recover it. 00:22:22.131 [2024-05-15 01:09:34.409758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.409986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.410012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.131 qpair failed and we were unable to recover it. 00:22:22.131 [2024-05-15 01:09:34.410221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.410435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.410460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.131 qpair failed and we were unable to recover it. 00:22:22.131 [2024-05-15 01:09:34.410789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.411007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.411032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.131 qpair failed and we were unable to recover it. 00:22:22.131 [2024-05-15 01:09:34.411191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.411376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.411400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.131 qpair failed and we were unable to recover it. 
00:22:22.131 [2024-05-15 01:09:34.411680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.411949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.412001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.131 qpair failed and we were unable to recover it. 00:22:22.131 [2024-05-15 01:09:34.412174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.412383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.412410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.131 qpair failed and we were unable to recover it. 00:22:22.131 [2024-05-15 01:09:34.412673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.412916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.412954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.131 qpair failed and we were unable to recover it. 00:22:22.131 [2024-05-15 01:09:34.413125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.413339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.413364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.131 qpair failed and we were unable to recover it. 00:22:22.131 [2024-05-15 01:09:34.413568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.413794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.413818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.131 qpair failed and we were unable to recover it. 00:22:22.131 [2024-05-15 01:09:34.414042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.414200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.414225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.131 qpair failed and we were unable to recover it. 00:22:22.131 [2024-05-15 01:09:34.414449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.414675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.414700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.131 qpair failed and we were unable to recover it. 
00:22:22.131 [2024-05-15 01:09:34.414859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.415227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.415254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.131 qpair failed and we were unable to recover it. 00:22:22.131 [2024-05-15 01:09:34.415473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.415642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.415683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.131 qpair failed and we were unable to recover it. 00:22:22.131 [2024-05-15 01:09:34.415900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.416106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.416131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.131 qpair failed and we were unable to recover it. 00:22:22.131 [2024-05-15 01:09:34.416351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.416540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.416564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.131 qpair failed and we were unable to recover it. 00:22:22.131 [2024-05-15 01:09:34.416778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.416969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.416994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.131 qpair failed and we were unable to recover it. 00:22:22.131 [2024-05-15 01:09:34.417188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.417417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.417458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.131 qpair failed and we were unable to recover it. 00:22:22.131 [2024-05-15 01:09:34.417643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.417837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.417862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.131 qpair failed and we were unable to recover it. 
00:22:22.131 [2024-05-15 01:09:34.418030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.418215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.418239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.131 qpair failed and we were unable to recover it. 00:22:22.131 [2024-05-15 01:09:34.418457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.418647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.418670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.131 qpair failed and we were unable to recover it. 00:22:22.131 [2024-05-15 01:09:34.418839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.419028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.419058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.131 qpair failed and we were unable to recover it. 00:22:22.131 [2024-05-15 01:09:34.419283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.131 [2024-05-15 01:09:34.419474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.419501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.132 qpair failed and we were unable to recover it. 00:22:22.132 [2024-05-15 01:09:34.419692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.419884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.419908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.132 qpair failed and we were unable to recover it. 00:22:22.132 [2024-05-15 01:09:34.420105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.420321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.420346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.132 qpair failed and we were unable to recover it. 00:22:22.132 [2024-05-15 01:09:34.420514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.420703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.420728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.132 qpair failed and we were unable to recover it. 
00:22:22.132 [2024-05-15 01:09:34.420948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.421181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.421205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.132 qpair failed and we were unable to recover it. 00:22:22.132 [2024-05-15 01:09:34.421398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.421615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.421640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.132 qpair failed and we were unable to recover it. 00:22:22.132 [2024-05-15 01:09:34.421854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.422043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.422070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.132 qpair failed and we were unable to recover it. 00:22:22.132 [2024-05-15 01:09:34.422283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.422462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.422489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.132 qpair failed and we were unable to recover it. 00:22:22.132 [2024-05-15 01:09:34.422725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.422910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.422943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.132 qpair failed and we were unable to recover it. 00:22:22.132 [2024-05-15 01:09:34.423186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.423410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.423438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.132 qpair failed and we were unable to recover it. 00:22:22.132 [2024-05-15 01:09:34.423634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.423828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.423853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.132 qpair failed and we were unable to recover it. 
00:22:22.132 [2024-05-15 01:09:34.424018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.424187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.424212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.132 qpair failed and we were unable to recover it. 00:22:22.132 [2024-05-15 01:09:34.424405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.424631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.424681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.132 qpair failed and we were unable to recover it. 00:22:22.132 [2024-05-15 01:09:34.424939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.425120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.425145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.132 qpair failed and we were unable to recover it. 00:22:22.132 [2024-05-15 01:09:34.425391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.425556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.425580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.132 qpair failed and we were unable to recover it. 00:22:22.132 [2024-05-15 01:09:34.425797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.425959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.425983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.132 qpair failed and we were unable to recover it. 00:22:22.132 [2024-05-15 01:09:34.426162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.426385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.426410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.132 qpair failed and we were unable to recover it. 00:22:22.132 [2024-05-15 01:09:34.426592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.426781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.426806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.132 qpair failed and we were unable to recover it. 
00:22:22.132 [2024-05-15 01:09:34.427012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.427239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.427265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.132 qpair failed and we were unable to recover it. 00:22:22.132 [2024-05-15 01:09:34.427481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.427664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.427693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.132 qpair failed and we were unable to recover it. 00:22:22.132 [2024-05-15 01:09:34.427867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.428071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.428098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.132 qpair failed and we were unable to recover it. 00:22:22.132 [2024-05-15 01:09:34.428338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.428547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.428571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.132 qpair failed and we were unable to recover it. 00:22:22.132 [2024-05-15 01:09:34.428802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.429008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.429032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.132 qpair failed and we were unable to recover it. 00:22:22.132 [2024-05-15 01:09:34.429224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.429426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.429451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.132 qpair failed and we were unable to recover it. 00:22:22.132 [2024-05-15 01:09:34.429644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.429846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.429875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.132 qpair failed and we were unable to recover it. 
00:22:22.132 [2024-05-15 01:09:34.430090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.430304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.430328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.132 qpair failed and we were unable to recover it. 00:22:22.132 [2024-05-15 01:09:34.430515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.430710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.430735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.132 qpair failed and we were unable to recover it. 00:22:22.132 [2024-05-15 01:09:34.430901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.431142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.431167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.132 qpair failed and we were unable to recover it. 00:22:22.132 [2024-05-15 01:09:34.431360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.431580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.431604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.132 qpair failed and we were unable to recover it. 00:22:22.132 [2024-05-15 01:09:34.431773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.432007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.132 [2024-05-15 01:09:34.432036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.132 qpair failed and we were unable to recover it. 00:22:22.132 [2024-05-15 01:09:34.432256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.432469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.432494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.133 qpair failed and we were unable to recover it. 00:22:22.133 [2024-05-15 01:09:34.432713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.432967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.433000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.133 qpair failed and we were unable to recover it. 
00:22:22.133 [2024-05-15 01:09:34.433167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.433355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.433380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.133 qpair failed and we were unable to recover it. 00:22:22.133 [2024-05-15 01:09:34.433575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.433733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.433759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.133 qpair failed and we were unable to recover it. 00:22:22.133 [2024-05-15 01:09:34.433994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.434210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.434234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.133 qpair failed and we were unable to recover it. 00:22:22.133 [2024-05-15 01:09:34.434432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.434590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.434613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.133 qpair failed and we were unable to recover it. 00:22:22.133 [2024-05-15 01:09:34.434806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.434995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.435021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.133 qpair failed and we were unable to recover it. 00:22:22.133 [2024-05-15 01:09:34.435238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.435423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.435462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.133 qpair failed and we were unable to recover it. 00:22:22.133 [2024-05-15 01:09:34.435637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.435798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.435825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.133 qpair failed and we were unable to recover it. 
00:22:22.133 [2024-05-15 01:09:34.436019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.436207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.436232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.133 qpair failed and we were unable to recover it. 00:22:22.133 [2024-05-15 01:09:34.436429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.436639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.436669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.133 qpair failed and we were unable to recover it. 00:22:22.133 [2024-05-15 01:09:34.436911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.437095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.437120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.133 qpair failed and we were unable to recover it. 00:22:22.133 [2024-05-15 01:09:34.437357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.437720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.437776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.133 qpair failed and we were unable to recover it. 00:22:22.133 [2024-05-15 01:09:34.438016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.438173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.438198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.133 qpair failed and we were unable to recover it. 00:22:22.133 [2024-05-15 01:09:34.438392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.438735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.438790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.133 qpair failed and we were unable to recover it. 00:22:22.133 [2024-05-15 01:09:34.439021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.439237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.439264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.133 qpair failed and we were unable to recover it. 
00:22:22.133 [2024-05-15 01:09:34.439516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.439693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.439718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.133 qpair failed and we were unable to recover it. 00:22:22.133 [2024-05-15 01:09:34.439950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.440141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.440165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.133 qpair failed and we were unable to recover it. 00:22:22.133 [2024-05-15 01:09:34.440358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.440591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.440617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.133 qpair failed and we were unable to recover it. 00:22:22.133 [2024-05-15 01:09:34.440824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.441041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.441067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.133 qpair failed and we were unable to recover it. 00:22:22.133 [2024-05-15 01:09:34.441259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.441426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.441451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.133 qpair failed and we were unable to recover it. 00:22:22.133 [2024-05-15 01:09:34.441645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.441834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.441860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.133 qpair failed and we were unable to recover it. 00:22:22.133 [2024-05-15 01:09:34.442107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.442358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.442382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.133 qpair failed and we were unable to recover it. 
00:22:22.133 [2024-05-15 01:09:34.442549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.442718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.442743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.133 qpair failed and we were unable to recover it. 00:22:22.133 [2024-05-15 01:09:34.442939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.443161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.443186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.133 qpair failed and we were unable to recover it. 00:22:22.133 [2024-05-15 01:09:34.443416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.443599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.443623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.133 qpair failed and we were unable to recover it. 00:22:22.133 [2024-05-15 01:09:34.443818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.443974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.444000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.133 qpair failed and we were unable to recover it. 00:22:22.133 [2024-05-15 01:09:34.444214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.444412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.444440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.133 qpair failed and we were unable to recover it. 00:22:22.133 [2024-05-15 01:09:34.444611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.444825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.444852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.133 qpair failed and we were unable to recover it. 00:22:22.133 [2024-05-15 01:09:34.445042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.445229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.133 [2024-05-15 01:09:34.445255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.133 qpair failed and we were unable to recover it. 
00:22:22.134 [2024-05-15 01:09:34.445488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.445648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.445672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.134 qpair failed and we were unable to recover it. 00:22:22.134 [2024-05-15 01:09:34.445860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.446048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.446074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.134 qpair failed and we were unable to recover it. 00:22:22.134 [2024-05-15 01:09:34.446239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.446392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.446418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.134 qpair failed and we were unable to recover it. 00:22:22.134 [2024-05-15 01:09:34.446585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.446774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.446798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.134 qpair failed and we were unable to recover it. 00:22:22.134 [2024-05-15 01:09:34.446961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.447205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.447233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.134 qpair failed and we were unable to recover it. 00:22:22.134 [2024-05-15 01:09:34.447444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.447638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.447680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.134 qpair failed and we were unable to recover it. 00:22:22.134 [2024-05-15 01:09:34.447914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.448170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.448197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.134 qpair failed and we were unable to recover it. 
00:22:22.134 [2024-05-15 01:09:34.448357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.448581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.448606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.134 qpair failed and we were unable to recover it. 00:22:22.134 [2024-05-15 01:09:34.448830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.449018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.449043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.134 qpair failed and we were unable to recover it. 00:22:22.134 [2024-05-15 01:09:34.449216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.449404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.449428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.134 qpair failed and we were unable to recover it. 00:22:22.134 [2024-05-15 01:09:34.449664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.449866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.449893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.134 qpair failed and we were unable to recover it. 00:22:22.134 [2024-05-15 01:09:34.450109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.450296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.450321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.134 qpair failed and we were unable to recover it. 00:22:22.134 [2024-05-15 01:09:34.450537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.450694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.450734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.134 qpair failed and we were unable to recover it. 00:22:22.134 [2024-05-15 01:09:34.450948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.451102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.451127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.134 qpair failed and we were unable to recover it. 
00:22:22.134 [2024-05-15 01:09:34.451317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.451504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.451529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.134 qpair failed and we were unable to recover it. 00:22:22.134 [2024-05-15 01:09:34.451717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.451903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.451927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.134 qpair failed and we were unable to recover it. 00:22:22.134 [2024-05-15 01:09:34.452126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.452383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.452429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.134 qpair failed and we were unable to recover it. 00:22:22.134 [2024-05-15 01:09:34.452647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.452880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.452908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.134 qpair failed and we were unable to recover it. 00:22:22.134 [2024-05-15 01:09:34.453136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.453343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.453372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.134 qpair failed and we were unable to recover it. 00:22:22.134 [2024-05-15 01:09:34.453546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.453753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.453781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.134 qpair failed and we were unable to recover it. 00:22:22.134 [2024-05-15 01:09:34.454000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.454212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.454241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.134 qpair failed and we were unable to recover it. 
00:22:22.134 [2024-05-15 01:09:34.454488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.454726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.454754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.134 qpair failed and we were unable to recover it. 00:22:22.134 [2024-05-15 01:09:34.454940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.455177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.455205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.134 qpair failed and we were unable to recover it. 00:22:22.134 [2024-05-15 01:09:34.455389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.455606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.455631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.134 qpair failed and we were unable to recover it. 00:22:22.134 [2024-05-15 01:09:34.455880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.456088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.456116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.134 qpair failed and we were unable to recover it. 00:22:22.134 [2024-05-15 01:09:34.456324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.456579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.456602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.134 qpair failed and we were unable to recover it. 00:22:22.134 [2024-05-15 01:09:34.456805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.457019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.457047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.134 qpair failed and we were unable to recover it. 00:22:22.134 [2024-05-15 01:09:34.457262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.457442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.457467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.134 qpair failed and we were unable to recover it. 
00:22:22.134 [2024-05-15 01:09:34.457692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.457900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.457927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.134 qpair failed and we were unable to recover it. 00:22:22.134 [2024-05-15 01:09:34.458127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.134 [2024-05-15 01:09:34.458416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.458442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.135 qpair failed and we were unable to recover it. 00:22:22.135 [2024-05-15 01:09:34.458650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.458824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.458850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.135 qpair failed and we were unable to recover it. 00:22:22.135 [2024-05-15 01:09:34.459038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.459293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.459341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.135 qpair failed and we were unable to recover it. 00:22:22.135 [2024-05-15 01:09:34.459626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.459876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.459902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.135 qpair failed and we were unable to recover it. 00:22:22.135 [2024-05-15 01:09:34.460130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.460386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.460412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.135 qpair failed and we were unable to recover it. 00:22:22.135 [2024-05-15 01:09:34.460577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.460762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.460792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.135 qpair failed and we were unable to recover it. 
00:22:22.135 [2024-05-15 01:09:34.461028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.461238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.461263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.135 qpair failed and we were unable to recover it. 00:22:22.135 [2024-05-15 01:09:34.461475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.461676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.461700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.135 qpair failed and we were unable to recover it. 00:22:22.135 [2024-05-15 01:09:34.461908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.462120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.462148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.135 qpair failed and we were unable to recover it. 00:22:22.135 [2024-05-15 01:09:34.462355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.462556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.462579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.135 qpair failed and we were unable to recover it. 00:22:22.135 [2024-05-15 01:09:34.462805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.463018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.463047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.135 qpair failed and we were unable to recover it. 00:22:22.135 [2024-05-15 01:09:34.463252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.463490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.463518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.135 qpair failed and we were unable to recover it. 00:22:22.135 [2024-05-15 01:09:34.463734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.463970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.463997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.135 qpair failed and we were unable to recover it. 
00:22:22.135 [2024-05-15 01:09:34.464188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.464372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.464399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.135 qpair failed and we were unable to recover it. 00:22:22.135 [2024-05-15 01:09:34.464610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.464841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.464868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.135 qpair failed and we were unable to recover it. 00:22:22.135 [2024-05-15 01:09:34.465077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.465286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.465313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.135 qpair failed and we were unable to recover it. 00:22:22.135 [2024-05-15 01:09:34.465549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.465763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.465788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.135 qpair failed and we were unable to recover it. 00:22:22.135 [2024-05-15 01:09:34.466000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.466207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.466231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.135 qpair failed and we were unable to recover it. 00:22:22.135 [2024-05-15 01:09:34.466439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.466725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.466771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.135 qpair failed and we were unable to recover it. 00:22:22.135 [2024-05-15 01:09:34.466989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.467163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.467189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.135 qpair failed and we were unable to recover it. 
00:22:22.135 [2024-05-15 01:09:34.467392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.467548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.467572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.135 qpair failed and we were unable to recover it. 00:22:22.135 [2024-05-15 01:09:34.467776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.467960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.468000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.135 qpair failed and we were unable to recover it. 00:22:22.135 [2024-05-15 01:09:34.468206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.468428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.468453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.135 qpair failed and we were unable to recover it. 00:22:22.135 [2024-05-15 01:09:34.468694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.468892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.468915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.135 qpair failed and we were unable to recover it. 00:22:22.135 [2024-05-15 01:09:34.469103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.469271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.135 [2024-05-15 01:09:34.469299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.135 qpair failed and we were unable to recover it. 00:22:22.135 [2024-05-15 01:09:34.469479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.469767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.469791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.136 qpair failed and we were unable to recover it. 00:22:22.136 [2024-05-15 01:09:34.470027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.470219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.470247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.136 qpair failed and we were unable to recover it. 
00:22:22.136 [2024-05-15 01:09:34.470471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.470683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.470706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.136 qpair failed and we were unable to recover it. 00:22:22.136 [2024-05-15 01:09:34.470895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.471106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.471131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.136 qpair failed and we were unable to recover it. 00:22:22.136 [2024-05-15 01:09:34.471440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.471606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.471647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.136 qpair failed and we were unable to recover it. 00:22:22.136 [2024-05-15 01:09:34.471855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.472052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.472078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.136 qpair failed and we were unable to recover it. 00:22:22.136 [2024-05-15 01:09:34.472241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.472464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.472489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.136 qpair failed and we were unable to recover it. 00:22:22.136 [2024-05-15 01:09:34.472712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.472899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.472928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.136 qpair failed and we were unable to recover it. 00:22:22.136 [2024-05-15 01:09:34.473174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.473384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.473411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.136 qpair failed and we were unable to recover it. 
00:22:22.136 [2024-05-15 01:09:34.473608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.473777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.473801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.136 qpair failed and we were unable to recover it. 00:22:22.136 [2024-05-15 01:09:34.473956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.474180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.474205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.136 qpair failed and we were unable to recover it. 00:22:22.136 [2024-05-15 01:09:34.474442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.474614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.474642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.136 qpair failed and we were unable to recover it. 00:22:22.136 [2024-05-15 01:09:34.474869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.475081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.475106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.136 qpair failed and we were unable to recover it. 00:22:22.136 [2024-05-15 01:09:34.475314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.475522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.475549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.136 qpair failed and we were unable to recover it. 00:22:22.136 [2024-05-15 01:09:34.475786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.476063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.476109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.136 qpair failed and we were unable to recover it. 00:22:22.136 [2024-05-15 01:09:34.476347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.476646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.476697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.136 qpair failed and we were unable to recover it. 
00:22:22.136 [2024-05-15 01:09:34.476935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.477142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.477182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.136 qpair failed and we were unable to recover it. 00:22:22.136 [2024-05-15 01:09:34.477399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.477667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.477718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.136 qpair failed and we were unable to recover it. 00:22:22.136 [2024-05-15 01:09:34.477903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.478128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.478156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.136 qpair failed and we were unable to recover it. 00:22:22.136 [2024-05-15 01:09:34.478390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.478595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.478619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.136 qpair failed and we were unable to recover it. 00:22:22.136 [2024-05-15 01:09:34.478821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.479086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.479114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.136 qpair failed and we were unable to recover it. 00:22:22.136 [2024-05-15 01:09:34.479292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.479503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.479532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.136 qpair failed and we were unable to recover it. 00:22:22.136 [2024-05-15 01:09:34.479769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.479961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.479986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.136 qpair failed and we were unable to recover it. 
00:22:22.136 [2024-05-15 01:09:34.480166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.480365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.480391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.136 qpair failed and we were unable to recover it. 00:22:22.136 [2024-05-15 01:09:34.480576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.480810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.480837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.136 qpair failed and we were unable to recover it. 00:22:22.136 [2024-05-15 01:09:34.481079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.481280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.481304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.136 qpair failed and we were unable to recover it. 00:22:22.136 [2024-05-15 01:09:34.481480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.481702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.481727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.136 qpair failed and we were unable to recover it. 00:22:22.136 [2024-05-15 01:09:34.481987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.482145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.482170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.136 qpair failed and we were unable to recover it. 00:22:22.136 [2024-05-15 01:09:34.482341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.482508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.482538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.136 qpair failed and we were unable to recover it. 00:22:22.136 [2024-05-15 01:09:34.482780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.482945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.136 [2024-05-15 01:09:34.482970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.136 qpair failed and we were unable to recover it. 
00:22:22.137 [2024-05-15 01:09:34.483160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.483382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.483406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.137 qpair failed and we were unable to recover it. 00:22:22.137 [2024-05-15 01:09:34.483658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.483882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.483906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.137 qpair failed and we were unable to recover it. 00:22:22.137 [2024-05-15 01:09:34.484155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.484346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.484372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.137 qpair failed and we were unable to recover it. 00:22:22.137 [2024-05-15 01:09:34.484593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.484960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.484988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.137 qpair failed and we were unable to recover it. 00:22:22.137 [2024-05-15 01:09:34.485210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.485464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.485518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.137 qpair failed and we were unable to recover it. 00:22:22.137 [2024-05-15 01:09:34.485758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.485957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.485984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.137 qpair failed and we were unable to recover it. 00:22:22.137 [2024-05-15 01:09:34.486171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.486353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.486387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.137 qpair failed and we were unable to recover it. 
00:22:22.137 [2024-05-15 01:09:34.486599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.486831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.486855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.137 qpair failed and we were unable to recover it. 00:22:22.137 [2024-05-15 01:09:34.487046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.487255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.487282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.137 qpair failed and we were unable to recover it. 00:22:22.137 [2024-05-15 01:09:34.487500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.487717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.487744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.137 qpair failed and we were unable to recover it. 00:22:22.137 [2024-05-15 01:09:34.487973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.488127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.488151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.137 qpair failed and we were unable to recover it. 00:22:22.137 [2024-05-15 01:09:34.488370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.488560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.488584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.137 qpair failed and we were unable to recover it. 00:22:22.137 [2024-05-15 01:09:34.488794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.489007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.489032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.137 qpair failed and we were unable to recover it. 00:22:22.137 [2024-05-15 01:09:34.489242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.489446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.489474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.137 qpair failed and we were unable to recover it. 
00:22:22.137 [2024-05-15 01:09:34.489715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.489896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.489921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.137 qpair failed and we were unable to recover it. 00:22:22.137 [2024-05-15 01:09:34.490121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.490278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.490303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.137 qpair failed and we were unable to recover it. 00:22:22.137 [2024-05-15 01:09:34.490520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.490764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.490816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.137 qpair failed and we were unable to recover it. 00:22:22.137 [2024-05-15 01:09:34.491029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.491240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.491266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.137 qpair failed and we were unable to recover it. 00:22:22.137 [2024-05-15 01:09:34.491456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.491661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.491684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.137 qpair failed and we were unable to recover it. 00:22:22.137 [2024-05-15 01:09:34.491888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.492069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.492099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.137 qpair failed and we were unable to recover it. 00:22:22.137 [2024-05-15 01:09:34.492335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.492519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.492544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.137 qpair failed and we were unable to recover it. 
00:22:22.137 [2024-05-15 01:09:34.492708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.492898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.492922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.137 qpair failed and we were unable to recover it. 00:22:22.137 [2024-05-15 01:09:34.493148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.493429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.493480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.137 qpair failed and we were unable to recover it. 00:22:22.137 [2024-05-15 01:09:34.493706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.493891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.493941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.137 qpair failed and we were unable to recover it. 00:22:22.137 [2024-05-15 01:09:34.494136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.494363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.494388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.137 qpair failed and we were unable to recover it. 00:22:22.137 [2024-05-15 01:09:34.494619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.494888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.494913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.137 qpair failed and we were unable to recover it. 00:22:22.137 [2024-05-15 01:09:34.495138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.495325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.495358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.137 qpair failed and we were unable to recover it. 00:22:22.137 [2024-05-15 01:09:34.495607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.495805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.495831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.137 qpair failed and we were unable to recover it. 
00:22:22.137 [2024-05-15 01:09:34.496054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.496239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.496267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.137 qpair failed and we were unable to recover it. 00:22:22.137 [2024-05-15 01:09:34.496501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.496664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.137 [2024-05-15 01:09:34.496688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.138 qpair failed and we were unable to recover it. 00:22:22.138 [2024-05-15 01:09:34.496880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.497075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.497103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.138 qpair failed and we were unable to recover it. 00:22:22.138 [2024-05-15 01:09:34.497355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.497525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.497554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.138 qpair failed and we were unable to recover it. 00:22:22.138 [2024-05-15 01:09:34.497739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.497911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.497940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.138 qpair failed and we were unable to recover it. 00:22:22.138 [2024-05-15 01:09:34.498136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.498295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.498318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.138 qpair failed and we were unable to recover it. 00:22:22.138 [2024-05-15 01:09:34.498503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.498693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.498718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.138 qpair failed and we were unable to recover it. 
00:22:22.138 [2024-05-15 01:09:34.498883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.499092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.499119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.138 qpair failed and we were unable to recover it. 00:22:22.138 [2024-05-15 01:09:34.499352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.499510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.499554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.138 qpair failed and we were unable to recover it. 00:22:22.138 [2024-05-15 01:09:34.499789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.500010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.500035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.138 qpair failed and we were unable to recover it. 00:22:22.138 [2024-05-15 01:09:34.500262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.500439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.500465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.138 qpair failed and we were unable to recover it. 00:22:22.138 [2024-05-15 01:09:34.500643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.500889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.500914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.138 qpair failed and we were unable to recover it. 00:22:22.138 [2024-05-15 01:09:34.501140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.501298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.501323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.138 qpair failed and we were unable to recover it. 00:22:22.138 [2024-05-15 01:09:34.501491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.501683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.501707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.138 qpair failed and we were unable to recover it. 
00:22:22.138 [2024-05-15 01:09:34.501864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.502071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.502097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.138 qpair failed and we were unable to recover it. 00:22:22.138 [2024-05-15 01:09:34.502311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.502516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.502545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.138 qpair failed and we were unable to recover it. 00:22:22.138 [2024-05-15 01:09:34.502757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.502961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.502986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.138 qpair failed and we were unable to recover it. 00:22:22.138 [2024-05-15 01:09:34.503223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.503417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.503445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.138 qpair failed and we were unable to recover it. 00:22:22.138 [2024-05-15 01:09:34.503660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.503877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.503903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.138 qpair failed and we were unable to recover it. 00:22:22.138 [2024-05-15 01:09:34.504128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.504339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.504366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.138 qpair failed and we were unable to recover it. 00:22:22.138 [2024-05-15 01:09:34.504548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.504786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.504813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.138 qpair failed and we were unable to recover it. 
00:22:22.138 [2024-05-15 01:09:34.504997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.505236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.505263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.138 qpair failed and we were unable to recover it. 00:22:22.138 [2024-05-15 01:09:34.505455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.505645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.505669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.138 qpair failed and we were unable to recover it. 00:22:22.138 [2024-05-15 01:09:34.505831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.506056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.506081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.138 qpair failed and we were unable to recover it. 00:22:22.138 [2024-05-15 01:09:34.506276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.506474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.506499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.138 qpair failed and we were unable to recover it. 00:22:22.138 [2024-05-15 01:09:34.506765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.506988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.507014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.138 qpair failed and we were unable to recover it. 00:22:22.138 [2024-05-15 01:09:34.507220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.507430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.507457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.138 qpair failed and we were unable to recover it. 00:22:22.138 [2024-05-15 01:09:34.507696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.507911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.507941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.138 qpair failed and we were unable to recover it. 
00:22:22.138 [2024-05-15 01:09:34.508171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.508375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.508402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.138 qpair failed and we were unable to recover it. 00:22:22.138 [2024-05-15 01:09:34.508607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.508836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.508861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.138 qpair failed and we were unable to recover it. 00:22:22.138 [2024-05-15 01:09:34.509081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.509331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.509359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.138 qpair failed and we were unable to recover it. 00:22:22.138 [2024-05-15 01:09:34.509578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.138 [2024-05-15 01:09:34.509782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.139 [2024-05-15 01:09:34.509810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.139 qpair failed and we were unable to recover it. 00:22:22.139 [2024-05-15 01:09:34.510052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.139 [2024-05-15 01:09:34.510235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.139 [2024-05-15 01:09:34.510259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.139 qpair failed and we were unable to recover it. 00:22:22.139 [2024-05-15 01:09:34.510509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.139 [2024-05-15 01:09:34.510726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.139 [2024-05-15 01:09:34.510752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.139 qpair failed and we were unable to recover it. 00:22:22.139 [2024-05-15 01:09:34.510963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.139 [2024-05-15 01:09:34.511183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.139 [2024-05-15 01:09:34.511210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.139 qpair failed and we were unable to recover it. 
00:22:22.139 [2024-05-15 01:09:34.511456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.139 [2024-05-15 01:09:34.511618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.139 [2024-05-15 01:09:34.511643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.139 qpair failed and we were unable to recover it. 00:22:22.139 [2024-05-15 01:09:34.511807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.139 [2024-05-15 01:09:34.511984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.139 [2024-05-15 01:09:34.512009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.139 qpair failed and we were unable to recover it. 00:22:22.139 [2024-05-15 01:09:34.512245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.139 [2024-05-15 01:09:34.512439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.139 [2024-05-15 01:09:34.512496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.139 qpair failed and we were unable to recover it. 00:22:22.139 [2024-05-15 01:09:34.512697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.139 [2024-05-15 01:09:34.512888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.139 [2024-05-15 01:09:34.512912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.139 qpair failed and we were unable to recover it. 00:22:22.139 [2024-05-15 01:09:34.513123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.139 [2024-05-15 01:09:34.513319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.139 [2024-05-15 01:09:34.513344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.139 qpair failed and we were unable to recover it. 00:22:22.139 [2024-05-15 01:09:34.513734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.139 [2024-05-15 01:09:34.513905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.139 [2024-05-15 01:09:34.513948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.139 qpair failed and we were unable to recover it. 00:22:22.139 [2024-05-15 01:09:34.514117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.139 [2024-05-15 01:09:34.514339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.139 [2024-05-15 01:09:34.514366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.139 qpair failed and we were unable to recover it. 
00:22:22.139 [2024-05-15 01:09:34.514548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.139 [2024-05-15 01:09:34.514760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.139 [2024-05-15 01:09:34.514784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.139 qpair failed and we were unable to recover it. 00:22:22.139 [2024-05-15 01:09:34.514955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.139 [2024-05-15 01:09:34.515113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.139 [2024-05-15 01:09:34.515155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.139 qpair failed and we were unable to recover it. 00:22:22.139 [2024-05-15 01:09:34.515335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.139 [2024-05-15 01:09:34.515531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.139 [2024-05-15 01:09:34.515558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.139 qpair failed and we were unable to recover it. 00:22:22.139 [2024-05-15 01:09:34.515768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.139 [2024-05-15 01:09:34.515965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.139 [2024-05-15 01:09:34.515992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.139 qpair failed and we were unable to recover it. 00:22:22.139 [2024-05-15 01:09:34.516191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.411 [2024-05-15 01:09:34.516409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.411 [2024-05-15 01:09:34.516437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.411 qpair failed and we were unable to recover it. 00:22:22.411 [2024-05-15 01:09:34.516654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.411 [2024-05-15 01:09:34.516826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.411 [2024-05-15 01:09:34.516851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.411 qpair failed and we were unable to recover it. 00:22:22.411 [2024-05-15 01:09:34.517077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.411 [2024-05-15 01:09:34.517278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.411 [2024-05-15 01:09:34.517304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.411 qpair failed and we were unable to recover it. 
00:22:22.411 [2024-05-15 01:09:34.517528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.411 [2024-05-15 01:09:34.517705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.411 [2024-05-15 01:09:34.517732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.411 qpair failed and we were unable to recover it. 00:22:22.411 [2024-05-15 01:09:34.517952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.411 [2024-05-15 01:09:34.518157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.411 [2024-05-15 01:09:34.518183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.411 qpair failed and we were unable to recover it. 00:22:22.411 [2024-05-15 01:09:34.518414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.411 [2024-05-15 01:09:34.518604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.411 [2024-05-15 01:09:34.518629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.411 qpair failed and we were unable to recover it. 00:22:22.411 [2024-05-15 01:09:34.518815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.411 [2024-05-15 01:09:34.519022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.519050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.412 qpair failed and we were unable to recover it. 00:22:22.412 [2024-05-15 01:09:34.519287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.519524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.519548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.412 qpair failed and we were unable to recover it. 00:22:22.412 [2024-05-15 01:09:34.519769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.519939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.519964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.412 qpair failed and we were unable to recover it. 00:22:22.412 [2024-05-15 01:09:34.520179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.520342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.520369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.412 qpair failed and we were unable to recover it. 
00:22:22.412 [2024-05-15 01:09:34.520545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.520766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.520794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.412 qpair failed and we were unable to recover it. 00:22:22.412 [2024-05-15 01:09:34.521034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.521244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.521269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.412 qpair failed and we were unable to recover it. 00:22:22.412 [2024-05-15 01:09:34.521481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.521711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.521737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.412 qpair failed and we were unable to recover it. 00:22:22.412 [2024-05-15 01:09:34.521977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.522143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.522168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.412 qpair failed and we were unable to recover it. 00:22:22.412 [2024-05-15 01:09:34.522351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.522558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.522586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.412 qpair failed and we were unable to recover it. 00:22:22.412 [2024-05-15 01:09:34.522801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.523041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.523067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.412 qpair failed and we were unable to recover it. 00:22:22.412 [2024-05-15 01:09:34.523277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.523487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.523512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.412 qpair failed and we were unable to recover it. 
00:22:22.412 [2024-05-15 01:09:34.523699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.523888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.523912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.412 qpair failed and we were unable to recover it. 00:22:22.412 [2024-05-15 01:09:34.524074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.524283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.524311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.412 qpair failed and we were unable to recover it. 00:22:22.412 [2024-05-15 01:09:34.524492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.524651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.524692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.412 qpair failed and we were unable to recover it. 00:22:22.412 [2024-05-15 01:09:34.524874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.525052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.525081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.412 qpair failed and we were unable to recover it. 00:22:22.412 [2024-05-15 01:09:34.525309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.525530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.525554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.412 qpair failed and we were unable to recover it. 00:22:22.412 [2024-05-15 01:09:34.525734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.525955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.525985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.412 qpair failed and we were unable to recover it. 00:22:22.412 [2024-05-15 01:09:34.526206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.526519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.526566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.412 qpair failed and we were unable to recover it. 
00:22:22.412 [2024-05-15 01:09:34.526785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.526997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.527026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.412 qpair failed and we were unable to recover it. 00:22:22.412 [2024-05-15 01:09:34.527267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.527450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.527475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.412 qpair failed and we were unable to recover it. 00:22:22.412 [2024-05-15 01:09:34.527660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.527906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.527935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.412 qpair failed and we were unable to recover it. 00:22:22.412 [2024-05-15 01:09:34.528126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.528338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.528366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.412 qpair failed and we were unable to recover it. 00:22:22.412 [2024-05-15 01:09:34.528578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.528762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.528786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.412 qpair failed and we were unable to recover it. 00:22:22.412 [2024-05-15 01:09:34.529008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.529244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.529271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.412 qpair failed and we were unable to recover it. 00:22:22.412 [2024-05-15 01:09:34.529512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.529672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.529697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.412 qpair failed and we were unable to recover it. 
00:22:22.412 [2024-05-15 01:09:34.529909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.530094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.530123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.412 qpair failed and we were unable to recover it. 00:22:22.412 [2024-05-15 01:09:34.530359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.530571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.530595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.412 qpair failed and we were unable to recover it. 00:22:22.412 [2024-05-15 01:09:34.530756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.530951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.530975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.412 qpair failed and we were unable to recover it. 00:22:22.412 [2024-05-15 01:09:34.531192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.531419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.531443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.412 qpair failed and we were unable to recover it. 00:22:22.412 [2024-05-15 01:09:34.531665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.531902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.412 [2024-05-15 01:09:34.531935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.412 qpair failed and we were unable to recover it. 00:22:22.413 [2024-05-15 01:09:34.532150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.413 [2024-05-15 01:09:34.532384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.413 [2024-05-15 01:09:34.532409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.413 qpair failed and we were unable to recover it. 00:22:22.413 [2024-05-15 01:09:34.532653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.413 [2024-05-15 01:09:34.532836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.413 [2024-05-15 01:09:34.532860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.413 qpair failed and we were unable to recover it. 
00:22:22.413 [2024-05-15 01:09:34.533029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.413 [2024-05-15 01:09:34.533195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.413 [2024-05-15 01:09:34.533220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.413 qpair failed and we were unable to recover it. 00:22:22.413 [2024-05-15 01:09:34.533378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.413 [2024-05-15 01:09:34.533542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.413 [2024-05-15 01:09:34.533585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.413 qpair failed and we were unable to recover it. 00:22:22.413 [2024-05-15 01:09:34.533810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.413 [2024-05-15 01:09:34.534000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.413 [2024-05-15 01:09:34.534025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.413 qpair failed and we were unable to recover it. 00:22:22.413 [2024-05-15 01:09:34.534232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.413 [2024-05-15 01:09:34.534442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.413 [2024-05-15 01:09:34.534466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.413 qpair failed and we were unable to recover it. 00:22:22.413 [2024-05-15 01:09:34.534662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.413 [2024-05-15 01:09:34.534851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.413 [2024-05-15 01:09:34.534875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.413 qpair failed and we were unable to recover it. 00:22:22.413 [2024-05-15 01:09:34.535100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.413 [2024-05-15 01:09:34.535307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.413 [2024-05-15 01:09:34.535332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.413 qpair failed and we were unable to recover it. 00:22:22.413 [2024-05-15 01:09:34.535546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.413 [2024-05-15 01:09:34.535730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.413 [2024-05-15 01:09:34.535758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.413 qpair failed and we were unable to recover it. 
00:22:22.413 [2024-05-15 01:09:34.535968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.413 [2024-05-15 01:09:34.536180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.413 [2024-05-15 01:09:34.536207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.413 qpair failed and we were unable to recover it. 00:22:22.413 [2024-05-15 01:09:34.536412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.413 [2024-05-15 01:09:34.536633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.413 [2024-05-15 01:09:34.536656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.413 qpair failed and we were unable to recover it. 00:22:22.413 [2024-05-15 01:09:34.536849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.413 [2024-05-15 01:09:34.537074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.413 [2024-05-15 01:09:34.537101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.413 qpair failed and we were unable to recover it. 00:22:22.413 [2024-05-15 01:09:34.537285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.413 [2024-05-15 01:09:34.537494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.413 [2024-05-15 01:09:34.537524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.413 qpair failed and we were unable to recover it. 00:22:22.413 [2024-05-15 01:09:34.537733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.413 [2024-05-15 01:09:34.537948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.413 [2024-05-15 01:09:34.537974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.413 qpair failed and we were unable to recover it. 00:22:22.413 [2024-05-15 01:09:34.538194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.413 [2024-05-15 01:09:34.538409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.413 [2024-05-15 01:09:34.538437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.413 qpair failed and we were unable to recover it. 00:22:22.413 [2024-05-15 01:09:34.538649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.413 [2024-05-15 01:09:34.538841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.413 [2024-05-15 01:09:34.538883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.413 qpair failed and we were unable to recover it. 
00:22:22.413 [2024-05-15 01:09:34.539110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:22.413 [2024-05-15 01:09:34.539352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:22.413 [2024-05-15 01:09:34.539401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420
00:22:22.413 qpair failed and we were unable to recover it.
[the same failure sequence repeats continuously from 01:09:34.539 through 01:09:34.609 — two posix_sock_create "connect() failed, errno = 111" entries, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420" entry, and "qpair failed and we were unable to recover it." for each attempt]
00:22:22.418 [2024-05-15 01:09:34.609119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:22.418 [2024-05-15 01:09:34.609295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:22.418 [2024-05-15 01:09:34.609323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420
00:22:22.418 qpair failed and we were unable to recover it.
00:22:22.418 [2024-05-15 01:09:34.609499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.418 [2024-05-15 01:09:34.609697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.418 [2024-05-15 01:09:34.609725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.418 qpair failed and we were unable to recover it. 00:22:22.418 [2024-05-15 01:09:34.609943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.418 [2024-05-15 01:09:34.610156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.418 [2024-05-15 01:09:34.610181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.418 qpair failed and we were unable to recover it. 00:22:22.419 [2024-05-15 01:09:34.610430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.610609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.610632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.419 qpair failed and we were unable to recover it. 00:22:22.419 [2024-05-15 01:09:34.610859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.611069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.611099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.419 qpair failed and we were unable to recover it. 00:22:22.419 [2024-05-15 01:09:34.611331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.611640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.611693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.419 qpair failed and we were unable to recover it. 00:22:22.419 [2024-05-15 01:09:34.611958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.612171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.612196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.419 qpair failed and we were unable to recover it. 00:22:22.419 [2024-05-15 01:09:34.612348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.612512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.612551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.419 qpair failed and we were unable to recover it. 
00:22:22.419 [2024-05-15 01:09:34.612751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.612981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.613009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.419 qpair failed and we were unable to recover it. 00:22:22.419 [2024-05-15 01:09:34.613219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.613437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.613464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.419 qpair failed and we were unable to recover it. 00:22:22.419 [2024-05-15 01:09:34.613697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.613939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.613965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.419 qpair failed and we were unable to recover it. 00:22:22.419 [2024-05-15 01:09:34.614176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.614376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.614399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.419 qpair failed and we were unable to recover it. 00:22:22.419 [2024-05-15 01:09:34.614649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.614852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.614879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.419 qpair failed and we were unable to recover it. 00:22:22.419 [2024-05-15 01:09:34.615057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.615257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.615285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.419 qpair failed and we were unable to recover it. 00:22:22.419 [2024-05-15 01:09:34.615482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.615686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.615711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.419 qpair failed and we were unable to recover it. 
00:22:22.419 [2024-05-15 01:09:34.615893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.616095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.616121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.419 qpair failed and we were unable to recover it. 00:22:22.419 [2024-05-15 01:09:34.616333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.616643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.616693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.419 qpair failed and we were unable to recover it. 00:22:22.419 [2024-05-15 01:09:34.616901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.617144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.617169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.419 qpair failed and we were unable to recover it. 00:22:22.419 [2024-05-15 01:09:34.617382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.617625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.617650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.419 qpair failed and we were unable to recover it. 00:22:22.419 [2024-05-15 01:09:34.617840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.618084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.618109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.419 qpair failed and we were unable to recover it. 00:22:22.419 [2024-05-15 01:09:34.618318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.618623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.618686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.419 qpair failed and we were unable to recover it. 00:22:22.419 [2024-05-15 01:09:34.618893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.619113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.619138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.419 qpair failed and we were unable to recover it. 
00:22:22.419 [2024-05-15 01:09:34.619389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.619600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.619630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.419 qpair failed and we were unable to recover it. 00:22:22.419 [2024-05-15 01:09:34.619862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.620078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.620107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.419 qpair failed and we were unable to recover it. 00:22:22.419 [2024-05-15 01:09:34.620318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.620493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.620517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.419 qpair failed and we were unable to recover it. 00:22:22.419 [2024-05-15 01:09:34.620741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.620946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.620974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.419 qpair failed and we were unable to recover it. 00:22:22.419 [2024-05-15 01:09:34.621209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.621406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.621431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.419 qpair failed and we were unable to recover it. 00:22:22.419 [2024-05-15 01:09:34.621621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.621809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.621833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.419 qpair failed and we were unable to recover it. 00:22:22.419 [2024-05-15 01:09:34.622072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.622274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.622302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.419 qpair failed and we were unable to recover it. 
00:22:22.419 [2024-05-15 01:09:34.622512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.622728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.622756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.419 qpair failed and we were unable to recover it. 00:22:22.419 [2024-05-15 01:09:34.622945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.623161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.623185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.419 qpair failed and we were unable to recover it. 00:22:22.419 [2024-05-15 01:09:34.623387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.623597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.623627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.419 qpair failed and we were unable to recover it. 00:22:22.419 [2024-05-15 01:09:34.623810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.624110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.419 [2024-05-15 01:09:34.624160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.420 qpair failed and we were unable to recover it. 00:22:22.420 [2024-05-15 01:09:34.624404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.624729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.624793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.420 qpair failed and we were unable to recover it. 00:22:22.420 [2024-05-15 01:09:34.625007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.625309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.625334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.420 qpair failed and we were unable to recover it. 00:22:22.420 [2024-05-15 01:09:34.625628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.625859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.625884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.420 qpair failed and we were unable to recover it. 
00:22:22.420 [2024-05-15 01:09:34.626080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.626294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.626352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.420 qpair failed and we were unable to recover it. 00:22:22.420 [2024-05-15 01:09:34.626557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.626751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.626814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.420 qpair failed and we were unable to recover it. 00:22:22.420 [2024-05-15 01:09:34.627027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.627222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.627246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.420 qpair failed and we were unable to recover it. 00:22:22.420 [2024-05-15 01:09:34.627404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.627616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.627673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.420 qpair failed and we were unable to recover it. 00:22:22.420 [2024-05-15 01:09:34.627858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.628070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.628101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.420 qpair failed and we were unable to recover it. 00:22:22.420 [2024-05-15 01:09:34.628315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.628655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.628702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.420 qpair failed and we were unable to recover it. 00:22:22.420 [2024-05-15 01:09:34.628912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.629092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.629122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.420 qpair failed and we were unable to recover it. 
00:22:22.420 [2024-05-15 01:09:34.629340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.629551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.629579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.420 qpair failed and we were unable to recover it. 00:22:22.420 [2024-05-15 01:09:34.629781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.630045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.630071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.420 qpair failed and we were unable to recover it. 00:22:22.420 [2024-05-15 01:09:34.630234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.630448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.630476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.420 qpair failed and we were unable to recover it. 00:22:22.420 [2024-05-15 01:09:34.630693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.630889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.630913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.420 qpair failed and we were unable to recover it. 00:22:22.420 [2024-05-15 01:09:34.631085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.631286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.631309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.420 qpair failed and we were unable to recover it. 00:22:22.420 [2024-05-15 01:09:34.631544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.631737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.631778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.420 qpair failed and we were unable to recover it. 00:22:22.420 [2024-05-15 01:09:34.632004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.632161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.632190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.420 qpair failed and we were unable to recover it. 
00:22:22.420 [2024-05-15 01:09:34.632421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.632745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.632791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.420 qpair failed and we were unable to recover it. 00:22:22.420 [2024-05-15 01:09:34.633026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.633266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.633291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.420 qpair failed and we were unable to recover it. 00:22:22.420 [2024-05-15 01:09:34.633531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.633723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.633751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.420 qpair failed and we were unable to recover it. 00:22:22.420 [2024-05-15 01:09:34.633972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.634155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.634183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.420 qpair failed and we were unable to recover it. 00:22:22.420 [2024-05-15 01:09:34.634430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.634847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.634899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.420 qpair failed and we were unable to recover it. 00:22:22.420 [2024-05-15 01:09:34.635115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.635317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.635342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.420 qpair failed and we were unable to recover it. 00:22:22.420 [2024-05-15 01:09:34.635542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.635750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.635777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.420 qpair failed and we were unable to recover it. 
00:22:22.420 [2024-05-15 01:09:34.635954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.636138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.420 [2024-05-15 01:09:34.636165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.420 qpair failed and we were unable to recover it. 00:22:22.420 [2024-05-15 01:09:34.636400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.636594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.636619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.421 qpair failed and we were unable to recover it. 00:22:22.421 [2024-05-15 01:09:34.636814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.637061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.637089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.421 qpair failed and we were unable to recover it. 00:22:22.421 [2024-05-15 01:09:34.637328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.637543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.637567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.421 qpair failed and we were unable to recover it. 00:22:22.421 [2024-05-15 01:09:34.637718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.637906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.637955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.421 qpair failed and we were unable to recover it. 00:22:22.421 [2024-05-15 01:09:34.638231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.638411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.638435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.421 qpair failed and we were unable to recover it. 00:22:22.421 [2024-05-15 01:09:34.638650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.638838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.638865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.421 qpair failed and we were unable to recover it. 
00:22:22.421 [2024-05-15 01:09:34.639047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.639280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.639308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.421 qpair failed and we were unable to recover it. 00:22:22.421 [2024-05-15 01:09:34.639496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.639737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.639764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.421 qpair failed and we were unable to recover it. 00:22:22.421 [2024-05-15 01:09:34.639997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.640151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.640175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.421 qpair failed and we were unable to recover it. 00:22:22.421 [2024-05-15 01:09:34.640411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.640795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.640855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.421 qpair failed and we were unable to recover it. 00:22:22.421 [2024-05-15 01:09:34.641125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.641367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.641427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.421 qpair failed and we were unable to recover it. 00:22:22.421 [2024-05-15 01:09:34.641631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.641858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.641886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.421 qpair failed and we were unable to recover it. 00:22:22.421 [2024-05-15 01:09:34.642091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.642316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.642343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.421 qpair failed and we were unable to recover it. 
00:22:22.421 [2024-05-15 01:09:34.642539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.642744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.642768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.421 qpair failed and we were unable to recover it. 00:22:22.421 [2024-05-15 01:09:34.642965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.643330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.643373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.421 qpair failed and we were unable to recover it. 00:22:22.421 [2024-05-15 01:09:34.643607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.643901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.643980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.421 qpair failed and we were unable to recover it. 00:22:22.421 [2024-05-15 01:09:34.644273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.644467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.644496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.421 qpair failed and we were unable to recover it. 00:22:22.421 [2024-05-15 01:09:34.644703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.644926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.644960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.421 qpair failed and we were unable to recover it. 00:22:22.421 [2024-05-15 01:09:34.645209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.645412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.645442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.421 qpair failed and we were unable to recover it. 00:22:22.421 [2024-05-15 01:09:34.645688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.645887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.645927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.421 qpair failed and we were unable to recover it. 
00:22:22.421 [2024-05-15 01:09:34.646129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.646337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.646364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.421 qpair failed and we were unable to recover it. 00:22:22.421 [2024-05-15 01:09:34.646533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.646716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.646744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.421 qpair failed and we were unable to recover it. 00:22:22.421 [2024-05-15 01:09:34.646925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.647121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.647145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.421 qpair failed and we were unable to recover it. 00:22:22.421 [2024-05-15 01:09:34.647356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.647536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.647563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.421 qpair failed and we were unable to recover it. 00:22:22.421 [2024-05-15 01:09:34.647772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.647964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.647994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.421 qpair failed and we were unable to recover it. 00:22:22.421 [2024-05-15 01:09:34.648209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.648508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.648563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.421 qpair failed and we were unable to recover it. 00:22:22.421 [2024-05-15 01:09:34.648775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.648992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.649017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.421 qpair failed and we were unable to recover it. 
00:22:22.421 [2024-05-15 01:09:34.649235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.649436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.649463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.421 qpair failed and we were unable to recover it. 00:22:22.421 [2024-05-15 01:09:34.649699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.649883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.649911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.421 qpair failed and we were unable to recover it. 00:22:22.421 [2024-05-15 01:09:34.650155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.650356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.421 [2024-05-15 01:09:34.650383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.421 qpair failed and we were unable to recover it. 00:22:22.422 [2024-05-15 01:09:34.650568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.650759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.650784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.422 qpair failed and we were unable to recover it. 00:22:22.422 [2024-05-15 01:09:34.650997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.651176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.651217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.422 qpair failed and we were unable to recover it. 00:22:22.422 [2024-05-15 01:09:34.651399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.651610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.651635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.422 qpair failed and we were unable to recover it. 00:22:22.422 [2024-05-15 01:09:34.651803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.651967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.651992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.422 qpair failed and we were unable to recover it. 
00:22:22.422 [2024-05-15 01:09:34.652179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.652398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.652425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.422 qpair failed and we were unable to recover it. 00:22:22.422 [2024-05-15 01:09:34.652614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.652856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.652881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.422 qpair failed and we were unable to recover it. 00:22:22.422 [2024-05-15 01:09:34.653064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.653276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.653304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.422 qpair failed and we were unable to recover it. 00:22:22.422 [2024-05-15 01:09:34.653516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.653688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.653711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.422 qpair failed and we were unable to recover it. 00:22:22.422 [2024-05-15 01:09:34.653894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.654087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.654114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.422 qpair failed and we were unable to recover it. 00:22:22.422 [2024-05-15 01:09:34.654339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.654580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.654605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.422 qpair failed and we were unable to recover it. 00:22:22.422 [2024-05-15 01:09:34.654792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.654980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.655006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.422 qpair failed and we were unable to recover it. 
00:22:22.422 [2024-05-15 01:09:34.655245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.655484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.655507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.422 qpair failed and we were unable to recover it. 00:22:22.422 [2024-05-15 01:09:34.655723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.655941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.655969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.422 qpair failed and we were unable to recover it. 00:22:22.422 [2024-05-15 01:09:34.656149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.656360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.656387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.422 qpair failed and we were unable to recover it. 00:22:22.422 [2024-05-15 01:09:34.656623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.656783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.656823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.422 qpair failed and we were unable to recover it. 00:22:22.422 [2024-05-15 01:09:34.657008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.657168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.657193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.422 qpair failed and we were unable to recover it. 00:22:22.422 [2024-05-15 01:09:34.657399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.657605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.657668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.422 qpair failed and we were unable to recover it. 00:22:22.422 [2024-05-15 01:09:34.657883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.658078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.658103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.422 qpair failed and we were unable to recover it. 
00:22:22.422 [2024-05-15 01:09:34.658351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.658562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.658589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.422 qpair failed and we were unable to recover it. 00:22:22.422 [2024-05-15 01:09:34.658761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.658979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.659007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.422 qpair failed and we were unable to recover it. 00:22:22.422 [2024-05-15 01:09:34.659182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.659403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.659430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.422 qpair failed and we were unable to recover it. 00:22:22.422 [2024-05-15 01:09:34.659668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.659864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.659887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.422 qpair failed and we were unable to recover it. 00:22:22.422 [2024-05-15 01:09:34.660106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.660316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.660343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.422 qpair failed and we were unable to recover it. 00:22:22.422 [2024-05-15 01:09:34.660529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.660740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.660764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.422 qpair failed and we were unable to recover it. 00:22:22.422 [2024-05-15 01:09:34.660984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.661163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.661190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.422 qpair failed and we were unable to recover it. 
00:22:22.422 [2024-05-15 01:09:34.661403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.661584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.661653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.422 qpair failed and we were unable to recover it. 00:22:22.422 [2024-05-15 01:09:34.661887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.662097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.662122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.422 qpair failed and we were unable to recover it. 00:22:22.422 [2024-05-15 01:09:34.662318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.662499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.662523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.422 qpair failed and we were unable to recover it. 00:22:22.422 [2024-05-15 01:09:34.662709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.662917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.662952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.422 qpair failed and we were unable to recover it. 00:22:22.422 [2024-05-15 01:09:34.663160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.663347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.422 [2024-05-15 01:09:34.663388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.423 qpair failed and we were unable to recover it. 00:22:22.423 [2024-05-15 01:09:34.663616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.663826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.663850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.423 qpair failed and we were unable to recover it. 00:22:22.423 [2024-05-15 01:09:34.664062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.664272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.664301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.423 qpair failed and we were unable to recover it. 
00:22:22.423 [2024-05-15 01:09:34.664491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.664711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.664736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.423 qpair failed and we were unable to recover it. 00:22:22.423 [2024-05-15 01:09:34.664976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.665189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.665218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.423 qpair failed and we were unable to recover it. 00:22:22.423 [2024-05-15 01:09:34.665405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.665563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.665603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.423 qpair failed and we were unable to recover it. 00:22:22.423 [2024-05-15 01:09:34.665780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.666043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.666069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.423 qpair failed and we were unable to recover it. 00:22:22.423 [2024-05-15 01:09:34.666276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.666491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.666515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.423 qpair failed and we were unable to recover it. 00:22:22.423 [2024-05-15 01:09:34.666679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.666889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.666916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.423 qpair failed and we were unable to recover it. 00:22:22.423 [2024-05-15 01:09:34.667116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.667289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.667314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.423 qpair failed and we were unable to recover it. 
00:22:22.423 [2024-05-15 01:09:34.667478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.667720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.667747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.423 qpair failed and we were unable to recover it. 00:22:22.423 [2024-05-15 01:09:34.667955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.668147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.668174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.423 qpair failed and we were unable to recover it. 00:22:22.423 [2024-05-15 01:09:34.668441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.668635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.668660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.423 qpair failed and we were unable to recover it. 00:22:22.423 [2024-05-15 01:09:34.668871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.669115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.669142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.423 qpair failed and we were unable to recover it. 00:22:22.423 [2024-05-15 01:09:34.669329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.669582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.669633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.423 qpair failed and we were unable to recover it. 00:22:22.423 [2024-05-15 01:09:34.669848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.670036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.670061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.423 qpair failed and we were unable to recover it. 00:22:22.423 [2024-05-15 01:09:34.670272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.670482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.670510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.423 qpair failed and we were unable to recover it. 
00:22:22.423 [2024-05-15 01:09:34.670750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.670946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.670988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.423 qpair failed and we were unable to recover it. 00:22:22.423 [2024-05-15 01:09:34.671182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.671410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.671438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.423 qpair failed and we were unable to recover it. 00:22:22.423 [2024-05-15 01:09:34.671675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.671884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.671909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.423 qpair failed and we were unable to recover it. 00:22:22.423 [2024-05-15 01:09:34.672074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.672265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.672290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.423 qpair failed and we were unable to recover it. 00:22:22.423 [2024-05-15 01:09:34.672487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.672695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.672722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.423 qpair failed and we were unable to recover it. 00:22:22.423 [2024-05-15 01:09:34.672904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.673095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.673123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.423 qpair failed and we were unable to recover it. 00:22:22.423 [2024-05-15 01:09:34.673372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.673551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.673578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.423 qpair failed and we were unable to recover it. 
00:22:22.423 [2024-05-15 01:09:34.673790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.674000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.674029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.423 qpair failed and we were unable to recover it. 00:22:22.423 [2024-05-15 01:09:34.674240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.674449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.674478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.423 qpair failed and we were unable to recover it. 00:22:22.423 [2024-05-15 01:09:34.674700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.674879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.674908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.423 qpair failed and we were unable to recover it. 00:22:22.423 [2024-05-15 01:09:34.675125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.675332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.675359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.423 qpair failed and we were unable to recover it. 00:22:22.423 [2024-05-15 01:09:34.675572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.675738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.675763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.423 qpair failed and we were unable to recover it. 00:22:22.423 [2024-05-15 01:09:34.675924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.676098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.676123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.423 qpair failed and we were unable to recover it. 00:22:22.423 [2024-05-15 01:09:34.676370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.423 [2024-05-15 01:09:34.676676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.676727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.424 qpair failed and we were unable to recover it. 
00:22:22.424 [2024-05-15 01:09:34.676949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.677158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.677186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.424 qpair failed and we were unable to recover it. 00:22:22.424 [2024-05-15 01:09:34.677402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.677607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.677635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.424 qpair failed and we were unable to recover it. 00:22:22.424 [2024-05-15 01:09:34.677849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.678037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.678067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.424 qpair failed and we were unable to recover it. 00:22:22.424 [2024-05-15 01:09:34.678305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.678529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.678556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.424 qpair failed and we were unable to recover it. 00:22:22.424 [2024-05-15 01:09:34.678789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.678973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.679003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.424 qpair failed and we were unable to recover it. 00:22:22.424 [2024-05-15 01:09:34.679196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.679441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.679465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.424 qpair failed and we were unable to recover it. 00:22:22.424 [2024-05-15 01:09:34.679668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.679877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.679904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.424 qpair failed and we were unable to recover it. 
00:22:22.424 [2024-05-15 01:09:34.680096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.680275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.680302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.424 qpair failed and we were unable to recover it. 00:22:22.424 [2024-05-15 01:09:34.680494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.680711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.680736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.424 qpair failed and we were unable to recover it. 00:22:22.424 [2024-05-15 01:09:34.680951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.681142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.681166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.424 qpair failed and we were unable to recover it. 00:22:22.424 [2024-05-15 01:09:34.681352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.681524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.681552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.424 qpair failed and we were unable to recover it. 00:22:22.424 [2024-05-15 01:09:34.681760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.681976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.682004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.424 qpair failed and we were unable to recover it. 00:22:22.424 [2024-05-15 01:09:34.682214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.682456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.682484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.424 qpair failed and we were unable to recover it. 00:22:22.424 [2024-05-15 01:09:34.682737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.682950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.682979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.424 qpair failed and we were unable to recover it. 
00:22:22.424 [2024-05-15 01:09:34.683196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.683352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.683377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.424 qpair failed and we were unable to recover it. 00:22:22.424 [2024-05-15 01:09:34.683530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.683710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.683735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.424 qpair failed and we were unable to recover it. 00:22:22.424 [2024-05-15 01:09:34.683959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.684171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.684198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.424 qpair failed and we were unable to recover it. 00:22:22.424 [2024-05-15 01:09:34.684400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.684622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.684649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.424 qpair failed and we were unable to recover it. 00:22:22.424 [2024-05-15 01:09:34.684844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.685003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.685028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.424 qpair failed and we were unable to recover it. 00:22:22.424 [2024-05-15 01:09:34.685239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.685436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.685491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.424 qpair failed and we were unable to recover it. 00:22:22.424 [2024-05-15 01:09:34.685676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.685879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.685906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.424 qpair failed and we were unable to recover it. 
00:22:22.424 [2024-05-15 01:09:34.686122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.686305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.686333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.424 qpair failed and we were unable to recover it. 00:22:22.424 [2024-05-15 01:09:34.686573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.686794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.686821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.424 qpair failed and we were unable to recover it. 00:22:22.424 [2024-05-15 01:09:34.687026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.687201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.687229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.424 qpair failed and we were unable to recover it. 00:22:22.424 [2024-05-15 01:09:34.687449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.687650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.687676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.424 qpair failed and we were unable to recover it. 00:22:22.424 [2024-05-15 01:09:34.687879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.688077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.688103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.424 qpair failed and we were unable to recover it. 00:22:22.424 [2024-05-15 01:09:34.688348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.688527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.688554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.424 qpair failed and we were unable to recover it. 00:22:22.424 [2024-05-15 01:09:34.688757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.688955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.688985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.424 qpair failed and we were unable to recover it. 
00:22:22.424 [2024-05-15 01:09:34.689215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.689398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.424 [2024-05-15 01:09:34.689425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.425 qpair failed and we were unable to recover it. 00:22:22.425 [2024-05-15 01:09:34.689658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.689868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.689895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.425 qpair failed and we were unable to recover it. 00:22:22.425 [2024-05-15 01:09:34.690079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.690268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.690297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.425 qpair failed and we were unable to recover it. 00:22:22.425 [2024-05-15 01:09:34.690488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.690698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.690725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.425 qpair failed and we were unable to recover it. 00:22:22.425 [2024-05-15 01:09:34.690959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.691154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.691179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.425 qpair failed and we were unable to recover it. 00:22:22.425 [2024-05-15 01:09:34.691346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.691501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.691542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.425 qpair failed and we were unable to recover it. 00:22:22.425 [2024-05-15 01:09:34.691733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.691917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.691947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.425 qpair failed and we were unable to recover it. 
00:22:22.425 [2024-05-15 01:09:34.692135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.692310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.692337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.425 qpair failed and we were unable to recover it. 00:22:22.425 [2024-05-15 01:09:34.692547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.692758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.692786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.425 qpair failed and we were unable to recover it. 00:22:22.425 [2024-05-15 01:09:34.693025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.693265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.693293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.425 qpair failed and we were unable to recover it. 00:22:22.425 [2024-05-15 01:09:34.693509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.693711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.693737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.425 qpair failed and we were unable to recover it. 00:22:22.425 [2024-05-15 01:09:34.693921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.694156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.694185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.425 qpair failed and we were unable to recover it. 00:22:22.425 [2024-05-15 01:09:34.694398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.694584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.694612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.425 qpair failed and we were unable to recover it. 00:22:22.425 [2024-05-15 01:09:34.694856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.695050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.695075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.425 qpair failed and we were unable to recover it. 
00:22:22.425 [2024-05-15 01:09:34.695311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.695530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.695555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.425 qpair failed and we were unable to recover it. 00:22:22.425 [2024-05-15 01:09:34.695713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.695920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.695956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.425 qpair failed and we were unable to recover it. 00:22:22.425 [2024-05-15 01:09:34.696171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.696374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.696402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.425 qpair failed and we were unable to recover it. 00:22:22.425 [2024-05-15 01:09:34.696619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.696780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.696821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.425 qpair failed and we were unable to recover it. 00:22:22.425 [2024-05-15 01:09:34.697034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.697219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.697248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.425 qpair failed and we were unable to recover it. 00:22:22.425 [2024-05-15 01:09:34.697488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.697678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.697703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.425 qpair failed and we were unable to recover it. 00:22:22.425 [2024-05-15 01:09:34.697872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.698056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.698082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.425 qpair failed and we were unable to recover it. 
00:22:22.425 [2024-05-15 01:09:34.698272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.698477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.698523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.425 qpair failed and we were unable to recover it. 00:22:22.425 [2024-05-15 01:09:34.698758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.698944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.698987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.425 qpair failed and we were unable to recover it. 00:22:22.425 [2024-05-15 01:09:34.699155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.699365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.699393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.425 qpair failed and we were unable to recover it. 00:22:22.425 [2024-05-15 01:09:34.699593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.699771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.699803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.425 qpair failed and we were unable to recover it. 00:22:22.425 [2024-05-15 01:09:34.700008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.700182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.700211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.425 qpair failed and we were unable to recover it. 00:22:22.425 [2024-05-15 01:09:34.700416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.700626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.425 [2024-05-15 01:09:34.700655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.425 qpair failed and we were unable to recover it. 00:22:22.426 [2024-05-15 01:09:34.700866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.701054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.701082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.426 qpair failed and we were unable to recover it. 
00:22:22.426 [2024-05-15 01:09:34.701284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.701466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.701493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.426 qpair failed and we were unable to recover it. 00:22:22.426 [2024-05-15 01:09:34.701705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.701911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.701945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.426 qpair failed and we were unable to recover it. 00:22:22.426 [2024-05-15 01:09:34.702157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.702320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.702345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.426 qpair failed and we were unable to recover it. 00:22:22.426 [2024-05-15 01:09:34.702510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.702680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.702705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.426 qpair failed and we were unable to recover it. 00:22:22.426 [2024-05-15 01:09:34.702913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.703134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.703162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.426 qpair failed and we were unable to recover it. 00:22:22.426 [2024-05-15 01:09:34.703369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.703565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.703590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.426 qpair failed and we were unable to recover it. 00:22:22.426 [2024-05-15 01:09:34.703834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.704069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.704103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.426 qpair failed and we were unable to recover it. 
00:22:22.426 [2024-05-15 01:09:34.704312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.704521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.704551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.426 qpair failed and we were unable to recover it. 00:22:22.426 [2024-05-15 01:09:34.704766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.704990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.705020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.426 qpair failed and we were unable to recover it. 00:22:22.426 [2024-05-15 01:09:34.705232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.705442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.705470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.426 qpair failed and we were unable to recover it. 00:22:22.426 [2024-05-15 01:09:34.705680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.705862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.705890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.426 qpair failed and we were unable to recover it. 00:22:22.426 [2024-05-15 01:09:34.706079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.706287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.706317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.426 qpair failed and we were unable to recover it. 00:22:22.426 [2024-05-15 01:09:34.706690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.706896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.706924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.426 qpair failed and we were unable to recover it. 00:22:22.426 [2024-05-15 01:09:34.707168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.707344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.707371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.426 qpair failed and we were unable to recover it. 
00:22:22.426 [2024-05-15 01:09:34.707580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.707804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.707831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.426 qpair failed and we were unable to recover it. 00:22:22.426 [2024-05-15 01:09:34.708042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.708252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.708279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.426 qpair failed and we were unable to recover it. 00:22:22.426 [2024-05-15 01:09:34.708457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.708699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.708731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.426 qpair failed and we were unable to recover it. 00:22:22.426 [2024-05-15 01:09:34.708954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.709145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.709170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.426 qpair failed and we were unable to recover it. 00:22:22.426 [2024-05-15 01:09:34.709336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.709523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.709551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.426 qpair failed and we were unable to recover it. 00:22:22.426 [2024-05-15 01:09:34.709777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.709991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.710019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.426 qpair failed and we were unable to recover it. 00:22:22.426 [2024-05-15 01:09:34.710260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.710432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.710459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.426 qpair failed and we were unable to recover it. 
00:22:22.426 [2024-05-15 01:09:34.710698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.710878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.710906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.426 qpair failed and we were unable to recover it. 00:22:22.426 [2024-05-15 01:09:34.711112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.711296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.711323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.426 qpair failed and we were unable to recover it. 00:22:22.426 [2024-05-15 01:09:34.711509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.711709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.711737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.426 qpair failed and we were unable to recover it. 00:22:22.426 [2024-05-15 01:09:34.711948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.712108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.712133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.426 qpair failed and we were unable to recover it. 00:22:22.426 [2024-05-15 01:09:34.712305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.712522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.712549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.426 qpair failed and we were unable to recover it. 00:22:22.426 [2024-05-15 01:09:34.712767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.712975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.713008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.426 qpair failed and we were unable to recover it. 00:22:22.426 [2024-05-15 01:09:34.713221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.713403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.713432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.426 qpair failed and we were unable to recover it. 
00:22:22.426 [2024-05-15 01:09:34.713648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.426 [2024-05-15 01:09:34.713812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.713837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.427 qpair failed and we were unable to recover it. 00:22:22.427 [2024-05-15 01:09:34.714053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.714263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.714290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.427 qpair failed and we were unable to recover it. 00:22:22.427 [2024-05-15 01:09:34.714482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.714663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.714692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.427 qpair failed and we were unable to recover it. 00:22:22.427 [2024-05-15 01:09:34.714902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.715085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.715110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.427 qpair failed and we were unable to recover it. 00:22:22.427 [2024-05-15 01:09:34.715352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.715584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.715610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.427 qpair failed and we were unable to recover it. 00:22:22.427 [2024-05-15 01:09:34.715816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.716006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.716035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.427 qpair failed and we were unable to recover it. 00:22:22.427 [2024-05-15 01:09:34.716280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.716441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.716466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.427 qpair failed and we were unable to recover it. 
00:22:22.427 [2024-05-15 01:09:34.716708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.716913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.716956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.427 qpair failed and we were unable to recover it. 00:22:22.427 [2024-05-15 01:09:34.717154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.717442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.717495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.427 qpair failed and we were unable to recover it. 00:22:22.427 [2024-05-15 01:09:34.717682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.717860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.717889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.427 qpair failed and we were unable to recover it. 00:22:22.427 [2024-05-15 01:09:34.718147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.718315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.718340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.427 qpair failed and we were unable to recover it. 00:22:22.427 [2024-05-15 01:09:34.718500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.718768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.718797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.427 qpair failed and we were unable to recover it. 00:22:22.427 [2024-05-15 01:09:34.719008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.719223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.719247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.427 qpair failed and we were unable to recover it. 00:22:22.427 [2024-05-15 01:09:34.719440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.719646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.719674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.427 qpair failed and we were unable to recover it. 
00:22:22.427 [2024-05-15 01:09:34.719882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.720064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.720093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.427 qpair failed and we were unable to recover it. 00:22:22.427 [2024-05-15 01:09:34.720268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.720541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.720566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.427 qpair failed and we were unable to recover it. 00:22:22.427 [2024-05-15 01:09:34.720780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.721055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.721080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.427 qpair failed and we were unable to recover it. 00:22:22.427 [2024-05-15 01:09:34.721300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.721505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.721531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.427 qpair failed and we were unable to recover it. 00:22:22.427 [2024-05-15 01:09:34.721724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.721927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.721963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.427 qpair failed and we were unable to recover it. 00:22:22.427 [2024-05-15 01:09:34.722179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.722385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.722413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.427 qpair failed and we were unable to recover it. 00:22:22.427 [2024-05-15 01:09:34.722623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.722827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.722854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.427 qpair failed and we were unable to recover it. 
00:22:22.427 [2024-05-15 01:09:34.723068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.723225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.723249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.427 qpair failed and we were unable to recover it. 00:22:22.427 [2024-05-15 01:09:34.723414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.723593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.723622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.427 qpair failed and we were unable to recover it. 00:22:22.427 [2024-05-15 01:09:34.723827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.724012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.724039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.427 qpair failed and we were unable to recover it. 00:22:22.427 [2024-05-15 01:09:34.724280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.724541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.724586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.427 qpair failed and we were unable to recover it. 00:22:22.427 [2024-05-15 01:09:34.724798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.724986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.725017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.427 qpair failed and we were unable to recover it. 00:22:22.427 [2024-05-15 01:09:34.725224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.725402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.725429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.427 qpair failed and we were unable to recover it. 00:22:22.427 [2024-05-15 01:09:34.725622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.725839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.725864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.427 qpair failed and we were unable to recover it. 
00:22:22.427 [2024-05-15 01:09:34.726027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.726215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.726240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.427 qpair failed and we were unable to recover it. 00:22:22.427 [2024-05-15 01:09:34.726456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.726670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.427 [2024-05-15 01:09:34.726698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.427 qpair failed and we were unable to recover it. 00:22:22.427 [2024-05-15 01:09:34.726910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.727109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.727133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.428 qpair failed and we were unable to recover it. 00:22:22.428 [2024-05-15 01:09:34.727344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.727549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.727576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.428 qpair failed and we were unable to recover it. 00:22:22.428 [2024-05-15 01:09:34.727758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.727940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.727967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.428 qpair failed and we were unable to recover it. 00:22:22.428 [2024-05-15 01:09:34.728148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.728331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.728358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.428 qpair failed and we were unable to recover it. 00:22:22.428 [2024-05-15 01:09:34.728567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.728758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.728782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.428 qpair failed and we were unable to recover it. 
00:22:22.428 [2024-05-15 01:09:34.728968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.729180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.729209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.428 qpair failed and we were unable to recover it. 00:22:22.428 [2024-05-15 01:09:34.729422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.729600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.729628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.428 qpair failed and we were unable to recover it. 00:22:22.428 [2024-05-15 01:09:34.729807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.730017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.730045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.428 qpair failed and we were unable to recover it. 00:22:22.428 [2024-05-15 01:09:34.730228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.730439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.730467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.428 qpair failed and we were unable to recover it. 00:22:22.428 [2024-05-15 01:09:34.730692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.730896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.730925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.428 qpair failed and we were unable to recover it. 00:22:22.428 [2024-05-15 01:09:34.731176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.731384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.731413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.428 qpair failed and we were unable to recover it. 00:22:22.428 [2024-05-15 01:09:34.731632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.731789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.731813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.428 qpair failed and we were unable to recover it. 
00:22:22.428 [2024-05-15 01:09:34.732016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.732206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.732234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.428 qpair failed and we were unable to recover it. 00:22:22.428 [2024-05-15 01:09:34.732480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.732660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.732687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.428 qpair failed and we were unable to recover it. 00:22:22.428 [2024-05-15 01:09:34.732896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.733145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.733170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.428 qpair failed and we were unable to recover it. 00:22:22.428 [2024-05-15 01:09:34.733383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.733627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.733651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.428 qpair failed and we were unable to recover it. 00:22:22.428 [2024-05-15 01:09:34.733815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.733997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.734027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.428 qpair failed and we were unable to recover it. 00:22:22.428 [2024-05-15 01:09:34.734231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.734433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.734459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.428 qpair failed and we were unable to recover it. 00:22:22.428 [2024-05-15 01:09:34.734650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.734817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.734859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.428 qpair failed and we were unable to recover it. 
00:22:22.428 [2024-05-15 01:09:34.735040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.735247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.735276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.428 qpair failed and we were unable to recover it. 00:22:22.428 [2024-05-15 01:09:34.735460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.735624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.735663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.428 qpair failed and we were unable to recover it. 00:22:22.428 [2024-05-15 01:09:34.735874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.736091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.736117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.428 qpair failed and we were unable to recover it. 00:22:22.428 [2024-05-15 01:09:34.736311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.736501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.736528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.428 qpair failed and we were unable to recover it. 00:22:22.428 [2024-05-15 01:09:34.736733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.736939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.736967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.428 qpair failed and we were unable to recover it. 00:22:22.428 [2024-05-15 01:09:34.737205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.737448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.737475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.428 qpair failed and we were unable to recover it. 00:22:22.428 [2024-05-15 01:09:34.737686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.737905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.737945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.428 qpair failed and we were unable to recover it. 
00:22:22.428 [2024-05-15 01:09:34.738106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.738340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.738368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.428 qpair failed and we were unable to recover it. 00:22:22.428 [2024-05-15 01:09:34.738593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.738798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.738825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.428 qpair failed and we were unable to recover it. 00:22:22.428 [2024-05-15 01:09:34.739014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.739195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.739236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.428 qpair failed and we were unable to recover it. 00:22:22.428 [2024-05-15 01:09:34.739425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.739637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.428 [2024-05-15 01:09:34.739662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.428 qpair failed and we were unable to recover it. 00:22:22.428 [2024-05-15 01:09:34.739871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.740061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.740090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.429 qpair failed and we were unable to recover it. 00:22:22.429 [2024-05-15 01:09:34.740304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.740547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.740575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.429 qpair failed and we were unable to recover it. 00:22:22.429 [2024-05-15 01:09:34.740790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.741033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.741061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.429 qpair failed and we were unable to recover it. 
00:22:22.429 [2024-05-15 01:09:34.741243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.741453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.741477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.429 qpair failed and we were unable to recover it. 00:22:22.429 [2024-05-15 01:09:34.741663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.741877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.741905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.429 qpair failed and we were unable to recover it. 00:22:22.429 [2024-05-15 01:09:34.742128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.742291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.742316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.429 qpair failed and we were unable to recover it. 00:22:22.429 [2024-05-15 01:09:34.742504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.742688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.742712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.429 qpair failed and we were unable to recover it. 00:22:22.429 [2024-05-15 01:09:34.742940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.743126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.743153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.429 qpair failed and we were unable to recover it. 00:22:22.429 [2024-05-15 01:09:34.743370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.743590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.743615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.429 qpair failed and we were unable to recover it. 00:22:22.429 [2024-05-15 01:09:34.743839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.744026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.744055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.429 qpair failed and we were unable to recover it. 
00:22:22.429 [2024-05-15 01:09:34.744299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.744476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.744502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.429 qpair failed and we were unable to recover it. 00:22:22.429 [2024-05-15 01:09:34.744681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.744896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.744923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.429 qpair failed and we were unable to recover it. 00:22:22.429 [2024-05-15 01:09:34.745129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.745339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.745368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.429 qpair failed and we were unable to recover it. 00:22:22.429 [2024-05-15 01:09:34.745587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.745818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.745845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.429 qpair failed and we were unable to recover it. 00:22:22.429 [2024-05-15 01:09:34.746042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.746254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.746282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.429 qpair failed and we were unable to recover it. 00:22:22.429 [2024-05-15 01:09:34.746577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.746815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.746840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.429 qpair failed and we were unable to recover it. 00:22:22.429 [2024-05-15 01:09:34.747036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.747241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.747269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.429 qpair failed and we were unable to recover it. 
00:22:22.429 [2024-05-15 01:09:34.747507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.747756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.747808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.429 qpair failed and we were unable to recover it. 00:22:22.429 [2024-05-15 01:09:34.748035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.748218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.748246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.429 qpair failed and we were unable to recover it. 00:22:22.429 [2024-05-15 01:09:34.748428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.748633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.748661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.429 qpair failed and we were unable to recover it. 00:22:22.429 [2024-05-15 01:09:34.748874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.749088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.749117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.429 qpair failed and we were unable to recover it. 00:22:22.429 [2024-05-15 01:09:34.749297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.749502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.749530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.429 qpair failed and we were unable to recover it. 00:22:22.429 [2024-05-15 01:09:34.749738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.749952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.749980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.429 qpair failed and we were unable to recover it. 00:22:22.429 [2024-05-15 01:09:34.750221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.750462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.750515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.429 qpair failed and we were unable to recover it. 
00:22:22.429 [2024-05-15 01:09:34.750726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.750936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.750964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.429 qpair failed and we were unable to recover it. 00:22:22.429 [2024-05-15 01:09:34.751192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.429 [2024-05-15 01:09:34.751382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.751407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.430 qpair failed and we were unable to recover it. 00:22:22.430 [2024-05-15 01:09:34.751599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.751844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.751872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.430 qpair failed and we were unable to recover it. 00:22:22.430 [2024-05-15 01:09:34.752077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.752259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.752286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.430 qpair failed and we were unable to recover it. 00:22:22.430 [2024-05-15 01:09:34.752528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.752713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.752738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.430 qpair failed and we were unable to recover it. 00:22:22.430 [2024-05-15 01:09:34.752983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.753203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.753228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.430 qpair failed and we were unable to recover it. 00:22:22.430 [2024-05-15 01:09:34.753392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.753554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.753578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.430 qpair failed and we were unable to recover it. 
00:22:22.430 [2024-05-15 01:09:34.753773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.754011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.754037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.430 qpair failed and we were unable to recover it. 00:22:22.430 [2024-05-15 01:09:34.754248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.754504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.754547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.430 qpair failed and we were unable to recover it. 00:22:22.430 [2024-05-15 01:09:34.754754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.754938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.754965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.430 qpair failed and we were unable to recover it. 00:22:22.430 [2024-05-15 01:09:34.755177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.755385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.755413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.430 qpair failed and we were unable to recover it. 00:22:22.430 [2024-05-15 01:09:34.755612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.755826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.755852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.430 qpair failed and we were unable to recover it. 00:22:22.430 [2024-05-15 01:09:34.756022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.756231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.756258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.430 qpair failed and we were unable to recover it. 00:22:22.430 [2024-05-15 01:09:34.756472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.756672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.756699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.430 qpair failed and we were unable to recover it. 
00:22:22.430 [2024-05-15 01:09:34.756910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.757086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.757112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.430 qpair failed and we were unable to recover it. 00:22:22.430 [2024-05-15 01:09:34.757281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.757489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.757540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.430 qpair failed and we were unable to recover it. 00:22:22.430 [2024-05-15 01:09:34.757722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.757940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.757967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.430 qpair failed and we were unable to recover it. 00:22:22.430 [2024-05-15 01:09:34.758210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.758415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.758443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.430 qpair failed and we were unable to recover it. 00:22:22.430 [2024-05-15 01:09:34.758653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.758859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.758886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.430 qpair failed and we were unable to recover it. 00:22:22.430 [2024-05-15 01:09:34.759102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.759253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.759278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.430 qpair failed and we were unable to recover it. 00:22:22.430 [2024-05-15 01:09:34.759484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.759670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.759698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.430 qpair failed and we were unable to recover it. 
00:22:22.430 [2024-05-15 01:09:34.759869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.760076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.760104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.430 qpair failed and we were unable to recover it. 00:22:22.430 [2024-05-15 01:09:34.760309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.760493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.760518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.430 qpair failed and we were unable to recover it. 00:22:22.430 [2024-05-15 01:09:34.760682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.760908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.760949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.430 qpair failed and we were unable to recover it. 00:22:22.430 [2024-05-15 01:09:34.761148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.761382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.761409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.430 qpair failed and we were unable to recover it. 00:22:22.430 [2024-05-15 01:09:34.761619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.761859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.761886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.430 qpair failed and we were unable to recover it. 00:22:22.430 [2024-05-15 01:09:34.762128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.762364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.762391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.430 qpair failed and we were unable to recover it. 00:22:22.430 [2024-05-15 01:09:34.762686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.762899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.762927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.430 qpair failed and we were unable to recover it. 
00:22:22.430 [2024-05-15 01:09:34.763143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.763328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.763356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.430 qpair failed and we were unable to recover it. 00:22:22.430 [2024-05-15 01:09:34.763563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.763768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.763796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.430 qpair failed and we were unable to recover it. 00:22:22.430 [2024-05-15 01:09:34.764000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.764191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.430 [2024-05-15 01:09:34.764219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.430 qpair failed and we were unable to recover it. 00:22:22.430 [2024-05-15 01:09:34.764474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.431 [2024-05-15 01:09:34.764727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.431 [2024-05-15 01:09:34.764754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.431 qpair failed and we were unable to recover it. 00:22:22.431 [2024-05-15 01:09:34.765005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.431 [2024-05-15 01:09:34.765201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.431 [2024-05-15 01:09:34.765225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.431 qpair failed and we were unable to recover it. 00:22:22.431 [2024-05-15 01:09:34.765419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.431 [2024-05-15 01:09:34.765600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.431 [2024-05-15 01:09:34.765625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.431 qpair failed and we were unable to recover it. 00:22:22.431 [2024-05-15 01:09:34.765816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.431 [2024-05-15 01:09:34.766003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.431 [2024-05-15 01:09:34.766032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.431 qpair failed and we were unable to recover it. 
00:22:22.431 [2024-05-15 01:09:34.766264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.431 [2024-05-15 01:09:34.766497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.431 [2024-05-15 01:09:34.766547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.431 qpair failed and we were unable to recover it. 00:22:22.431 [2024-05-15 01:09:34.766750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.431 [2024-05-15 01:09:34.766935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.431 [2024-05-15 01:09:34.766964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.431 qpair failed and we were unable to recover it. 00:22:22.431 [2024-05-15 01:09:34.767149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.431 [2024-05-15 01:09:34.767358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.431 [2024-05-15 01:09:34.767387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.431 qpair failed and we were unable to recover it. 00:22:22.431 [2024-05-15 01:09:34.767599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.431 [2024-05-15 01:09:34.767801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.431 [2024-05-15 01:09:34.767829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.431 qpair failed and we were unable to recover it. 00:22:22.431 [2024-05-15 01:09:34.768041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.431 [2024-05-15 01:09:34.768252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.431 [2024-05-15 01:09:34.768279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.431 qpair failed and we were unable to recover it. 00:22:22.431 [2024-05-15 01:09:34.768489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.431 [2024-05-15 01:09:34.768693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.431 [2024-05-15 01:09:34.768720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.431 qpair failed and we were unable to recover it. 00:22:22.431 [2024-05-15 01:09:34.768946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.431 [2024-05-15 01:09:34.769156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.431 [2024-05-15 01:09:34.769183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.431 qpair failed and we were unable to recover it. 
[... the same four-message sequence (two posix_sock_create "connect() failed, errno = 111" entries, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats for roughly 140 further attempts between 01:09:34.769482 and 01:09:34.833687 ...]
00:22:22.720 [2024-05-15 01:09:34.833900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.720 [2024-05-15 01:09:34.834120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.720 [2024-05-15 01:09:34.834149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.720 qpair failed and we were unable to recover it. 00:22:22.720 [2024-05-15 01:09:34.834358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.720 [2024-05-15 01:09:34.834609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.720 [2024-05-15 01:09:34.834635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.720 qpair failed and we were unable to recover it. 00:22:22.720 [2024-05-15 01:09:34.834854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.720 [2024-05-15 01:09:34.835073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.720 [2024-05-15 01:09:34.835102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.720 qpair failed and we were unable to recover it. 00:22:22.720 [2024-05-15 01:09:34.835306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.720 [2024-05-15 01:09:34.835518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.720 [2024-05-15 01:09:34.835545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.720 qpair failed and we were unable to recover it. 00:22:22.720 [2024-05-15 01:09:34.835812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.720 [2024-05-15 01:09:34.835996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.720 [2024-05-15 01:09:34.836024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.720 qpair failed and we were unable to recover it. 00:22:22.720 [2024-05-15 01:09:34.836237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.720 [2024-05-15 01:09:34.836423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.720 [2024-05-15 01:09:34.836484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.720 qpair failed and we were unable to recover it. 00:22:22.720 [2024-05-15 01:09:34.836724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.720 [2024-05-15 01:09:34.836915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.720 [2024-05-15 01:09:34.836947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.720 qpair failed and we were unable to recover it. 
00:22:22.720 [2024-05-15 01:09:34.837191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.720 [2024-05-15 01:09:34.837396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.720 [2024-05-15 01:09:34.837463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.720 qpair failed and we were unable to recover it. 00:22:22.720 [2024-05-15 01:09:34.837650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.720 [2024-05-15 01:09:34.837853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.720 [2024-05-15 01:09:34.837881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.720 qpair failed and we were unable to recover it. 00:22:22.720 [2024-05-15 01:09:34.838080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.720 [2024-05-15 01:09:34.838290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.838319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.721 qpair failed and we were unable to recover it. 00:22:22.721 [2024-05-15 01:09:34.838499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.838710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.838736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.721 qpair failed and we were unable to recover it. 00:22:22.721 [2024-05-15 01:09:34.838897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.839094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.839119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.721 qpair failed and we were unable to recover it. 00:22:22.721 [2024-05-15 01:09:34.839297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.839510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.839537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.721 qpair failed and we were unable to recover it. 00:22:22.721 [2024-05-15 01:09:34.839748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.839960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.839991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.721 qpair failed and we were unable to recover it. 
00:22:22.721 [2024-05-15 01:09:34.840189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.840421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.840446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.721 qpair failed and we were unable to recover it. 00:22:22.721 [2024-05-15 01:09:34.840641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.840879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.840907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.721 qpair failed and we were unable to recover it. 00:22:22.721 [2024-05-15 01:09:34.841129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.841464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.841523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.721 qpair failed and we were unable to recover it. 00:22:22.721 [2024-05-15 01:09:34.841971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.842248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.842276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.721 qpair failed and we were unable to recover it. 00:22:22.721 [2024-05-15 01:09:34.842496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.842834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.842885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.721 qpair failed and we were unable to recover it. 00:22:22.721 [2024-05-15 01:09:34.843094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.843310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.843338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.721 qpair failed and we were unable to recover it. 00:22:22.721 [2024-05-15 01:09:34.843549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.843706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.843730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.721 qpair failed and we were unable to recover it. 
00:22:22.721 [2024-05-15 01:09:34.843944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.844193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.844221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.721 qpair failed and we were unable to recover it. 00:22:22.721 [2024-05-15 01:09:34.844466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.844758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.844816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.721 qpair failed and we were unable to recover it. 00:22:22.721 [2024-05-15 01:09:34.845006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.845225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.845255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.721 qpair failed and we were unable to recover it. 00:22:22.721 [2024-05-15 01:09:34.845488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.845774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.845832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.721 qpair failed and we were unable to recover it. 00:22:22.721 [2024-05-15 01:09:34.846056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.846243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.846267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.721 qpair failed and we were unable to recover it. 00:22:22.721 [2024-05-15 01:09:34.846482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.846812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.846875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.721 qpair failed and we were unable to recover it. 00:22:22.721 [2024-05-15 01:09:34.847127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.847336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.847364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.721 qpair failed and we were unable to recover it. 
00:22:22.721 [2024-05-15 01:09:34.847582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.847789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.847817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.721 qpair failed and we were unable to recover it. 00:22:22.721 [2024-05-15 01:09:34.848050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.848283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.848333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.721 qpair failed and we were unable to recover it. 00:22:22.721 [2024-05-15 01:09:34.848527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.848777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.848801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.721 qpair failed and we were unable to recover it. 00:22:22.721 [2024-05-15 01:09:34.848957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.849187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.849227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.721 qpair failed and we were unable to recover it. 00:22:22.721 [2024-05-15 01:09:34.849442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.849663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.849688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.721 qpair failed and we were unable to recover it. 00:22:22.721 [2024-05-15 01:09:34.849873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.850090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.850119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.721 qpair failed and we were unable to recover it. 00:22:22.721 [2024-05-15 01:09:34.850302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.850508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.850539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.721 qpair failed and we were unable to recover it. 
00:22:22.721 [2024-05-15 01:09:34.850768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.721 [2024-05-15 01:09:34.850991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.851016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.722 qpair failed and we were unable to recover it. 00:22:22.722 [2024-05-15 01:09:34.851241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.851485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.851536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.722 qpair failed and we were unable to recover it. 00:22:22.722 [2024-05-15 01:09:34.851720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.851940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.851974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.722 qpair failed and we were unable to recover it. 00:22:22.722 [2024-05-15 01:09:34.852214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.852582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.852635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.722 qpair failed and we were unable to recover it. 00:22:22.722 [2024-05-15 01:09:34.852845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.853092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.853118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.722 qpair failed and we were unable to recover it. 00:22:22.722 [2024-05-15 01:09:34.853335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.853573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.853600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.722 qpair failed and we were unable to recover it. 00:22:22.722 [2024-05-15 01:09:34.853788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.853989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.854018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.722 qpair failed and we were unable to recover it. 
00:22:22.722 [2024-05-15 01:09:34.854232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.854469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.854497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.722 qpair failed and we were unable to recover it. 00:22:22.722 [2024-05-15 01:09:34.854732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.854947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.854974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.722 qpair failed and we were unable to recover it. 00:22:22.722 [2024-05-15 01:09:34.855172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.855449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.855501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.722 qpair failed and we were unable to recover it. 00:22:22.722 [2024-05-15 01:09:34.855806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.856033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.856059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.722 qpair failed and we were unable to recover it. 00:22:22.722 [2024-05-15 01:09:34.856254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.856408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.856433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.722 qpair failed and we were unable to recover it. 00:22:22.722 [2024-05-15 01:09:34.856673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.856864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.856888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.722 qpair failed and we were unable to recover it. 00:22:22.722 [2024-05-15 01:09:34.857072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.857283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.857313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.722 qpair failed and we were unable to recover it. 
00:22:22.722 [2024-05-15 01:09:34.857524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.857729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.857757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.722 qpair failed and we were unable to recover it. 00:22:22.722 [2024-05-15 01:09:34.857970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.858238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.858296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.722 qpair failed and we were unable to recover it. 00:22:22.722 [2024-05-15 01:09:34.858483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.858704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.858743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.722 qpair failed and we were unable to recover it. 00:22:22.722 [2024-05-15 01:09:34.858968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.859213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.859240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.722 qpair failed and we were unable to recover it. 00:22:22.722 [2024-05-15 01:09:34.859587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.860005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.860034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.722 qpair failed and we were unable to recover it. 00:22:22.722 [2024-05-15 01:09:34.860250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.860459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.722 [2024-05-15 01:09:34.860484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.723 qpair failed and we were unable to recover it. 00:22:22.723 [2024-05-15 01:09:34.860710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.860912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.860946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.723 qpair failed and we were unable to recover it. 
00:22:22.723 [2024-05-15 01:09:34.861140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.861348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.861376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.723 qpair failed and we were unable to recover it. 00:22:22.723 [2024-05-15 01:09:34.861616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.861800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.861823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.723 qpair failed and we were unable to recover it. 00:22:22.723 [2024-05-15 01:09:34.862003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.862170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.862195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.723 qpair failed and we were unable to recover it. 00:22:22.723 [2024-05-15 01:09:34.862444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.862654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.862686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.723 qpair failed and we were unable to recover it. 00:22:22.723 [2024-05-15 01:09:34.862927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.863163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.863191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.723 qpair failed and we were unable to recover it. 00:22:22.723 [2024-05-15 01:09:34.863403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.863581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.863605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.723 qpair failed and we were unable to recover it. 00:22:22.723 [2024-05-15 01:09:34.863821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.864029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.864056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.723 qpair failed and we were unable to recover it. 
00:22:22.723 [2024-05-15 01:09:34.864264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.864470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.864495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.723 qpair failed and we were unable to recover it. 00:22:22.723 [2024-05-15 01:09:34.864660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.864902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.864936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.723 qpair failed and we were unable to recover it. 00:22:22.723 [2024-05-15 01:09:34.865189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.865411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.865436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.723 qpair failed and we were unable to recover it. 00:22:22.723 [2024-05-15 01:09:34.865791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.866103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.866129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.723 qpair failed and we were unable to recover it. 00:22:22.723 [2024-05-15 01:09:34.866352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.866523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.866550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.723 qpair failed and we were unable to recover it. 00:22:22.723 [2024-05-15 01:09:34.866761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.866943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.866969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.723 qpair failed and we were unable to recover it. 00:22:22.723 [2024-05-15 01:09:34.867163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.867351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.867379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.723 qpair failed and we were unable to recover it. 
00:22:22.723 [2024-05-15 01:09:34.867593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.867969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.867998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.723 qpair failed and we were unable to recover it. 00:22:22.723 [2024-05-15 01:09:34.868208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.868440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.868492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.723 qpair failed and we were unable to recover it. 00:22:22.723 [2024-05-15 01:09:34.868818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.869047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.869075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.723 qpair failed and we were unable to recover it. 00:22:22.723 [2024-05-15 01:09:34.869360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.869666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.869689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.723 qpair failed and we were unable to recover it. 00:22:22.723 [2024-05-15 01:09:34.869921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.870169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.870197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.723 qpair failed and we were unable to recover it. 00:22:22.723 [2024-05-15 01:09:34.870439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.870744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.870771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.723 qpair failed and we were unable to recover it. 00:22:22.723 [2024-05-15 01:09:34.870982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.871232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.871259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.723 qpair failed and we were unable to recover it. 
00:22:22.723 [2024-05-15 01:09:34.871659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.871983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.872011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.723 qpair failed and we were unable to recover it. 00:22:22.723 [2024-05-15 01:09:34.872246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.872551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.872606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.723 qpair failed and we were unable to recover it. 00:22:22.723 [2024-05-15 01:09:34.872851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.873100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.873128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.723 qpair failed and we were unable to recover it. 00:22:22.723 [2024-05-15 01:09:34.873342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.873582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.873609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.723 qpair failed and we were unable to recover it. 00:22:22.723 [2024-05-15 01:09:34.873944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.874176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.874204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.723 qpair failed and we were unable to recover it. 00:22:22.723 [2024-05-15 01:09:34.874411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.874617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.723 [2024-05-15 01:09:34.874645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.723 qpair failed and we were unable to recover it. 00:22:22.724 [2024-05-15 01:09:34.874851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.875060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.875089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.724 qpair failed and we were unable to recover it. 
00:22:22.724 [2024-05-15 01:09:34.875279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.875537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.875586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.724 qpair failed and we were unable to recover it. 00:22:22.724 [2024-05-15 01:09:34.875800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.876016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.876041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.724 qpair failed and we were unable to recover it. 00:22:22.724 [2024-05-15 01:09:34.876258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.876637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.876699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.724 qpair failed and we were unable to recover it. 00:22:22.724 [2024-05-15 01:09:34.876941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.877120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.877148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.724 qpair failed and we were unable to recover it. 00:22:22.724 [2024-05-15 01:09:34.877326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.877589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.877640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.724 qpair failed and we were unable to recover it. 00:22:22.724 [2024-05-15 01:09:34.877816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.878021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.878049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.724 qpair failed and we were unable to recover it. 00:22:22.724 [2024-05-15 01:09:34.878246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.878569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.878620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.724 qpair failed and we were unable to recover it. 
00:22:22.724 [2024-05-15 01:09:34.878832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.879040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.879070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.724 qpair failed and we were unable to recover it. 00:22:22.724 [2024-05-15 01:09:34.879273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.879489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.879516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.724 qpair failed and we were unable to recover it. 00:22:22.724 [2024-05-15 01:09:34.879716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.879920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.879951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.724 qpair failed and we were unable to recover it. 00:22:22.724 [2024-05-15 01:09:34.880203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.880393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.880422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.724 qpair failed and we were unable to recover it. 00:22:22.724 [2024-05-15 01:09:34.880642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.880849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.880877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.724 qpair failed and we were unable to recover it. 00:22:22.724 [2024-05-15 01:09:34.881086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.881264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.881292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.724 qpair failed and we were unable to recover it. 00:22:22.724 [2024-05-15 01:09:34.881503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.881709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.881737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.724 qpair failed and we were unable to recover it. 
00:22:22.724 [2024-05-15 01:09:34.881951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.882141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.882168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.724 qpair failed and we were unable to recover it. 00:22:22.724 [2024-05-15 01:09:34.882379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.882577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.882602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.724 qpair failed and we were unable to recover it. 00:22:22.724 [2024-05-15 01:09:34.882793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.883076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.883105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.724 qpair failed and we were unable to recover it. 00:22:22.724 [2024-05-15 01:09:34.883418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.883721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.883749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.724 qpair failed and we were unable to recover it. 00:22:22.724 [2024-05-15 01:09:34.883985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.884197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.884236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.724 qpair failed and we were unable to recover it. 00:22:22.724 [2024-05-15 01:09:34.884403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.884614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.884642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.724 qpair failed and we were unable to recover it. 00:22:22.724 [2024-05-15 01:09:34.884867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.885091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.724 [2024-05-15 01:09:34.885120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.724 qpair failed and we were unable to recover it. 
00:22:22.724 [2024-05-15 01:09:34.885338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:22.724 [2024-05-15 01:09:34.885514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:22.724 [2024-05-15 01:09:34.885542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420
00:22:22.724 qpair failed and we were unable to recover it.
[... this four-line failure pattern repeats continuously from 01:09:34.885 through 01:09:34.953, always with the same tqpair=0x7f63f8000b90, addr=10.0.0.2, port=4420, and errno = 111; only the timestamps differ between repetitions ...]
00:22:22.731 [2024-05-15 01:09:34.953269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:22.731 [2024-05-15 01:09:34.953501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:22.731 [2024-05-15 01:09:34.953529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420
00:22:22.731 qpair failed and we were unable to recover it.
00:22:22.731 [2024-05-15 01:09:34.953733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.953911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.953945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.731 qpair failed and we were unable to recover it. 00:22:22.731 [2024-05-15 01:09:34.954156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.954314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.954339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.731 qpair failed and we were unable to recover it. 00:22:22.731 [2024-05-15 01:09:34.954534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.954710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.954737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.731 qpair failed and we were unable to recover it. 00:22:22.731 [2024-05-15 01:09:34.954951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.955137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.955162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.731 qpair failed and we were unable to recover it. 00:22:22.731 [2024-05-15 01:09:34.955348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.955587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.955615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.731 qpair failed and we were unable to recover it. 00:22:22.731 [2024-05-15 01:09:34.955820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.955994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.956022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.731 qpair failed and we were unable to recover it. 00:22:22.731 [2024-05-15 01:09:34.956197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.956411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.956435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.731 qpair failed and we were unable to recover it. 
00:22:22.731 [2024-05-15 01:09:34.956630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.956796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.956820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.731 qpair failed and we were unable to recover it. 00:22:22.731 [2024-05-15 01:09:34.957012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.957192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.957219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.731 qpair failed and we were unable to recover it. 00:22:22.731 [2024-05-15 01:09:34.957430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.957792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.957845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.731 qpair failed and we were unable to recover it. 00:22:22.731 [2024-05-15 01:09:34.958063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.958275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.958303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.731 qpair failed and we were unable to recover it. 00:22:22.731 [2024-05-15 01:09:34.958512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.958719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.958746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.731 qpair failed and we were unable to recover it. 00:22:22.731 [2024-05-15 01:09:34.958939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.959123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.959149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.731 qpair failed and we were unable to recover it. 00:22:22.731 [2024-05-15 01:09:34.959383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.959668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.959695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.731 qpair failed and we were unable to recover it. 
00:22:22.731 [2024-05-15 01:09:34.959879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.960100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.960128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.731 qpair failed and we were unable to recover it. 00:22:22.731 [2024-05-15 01:09:34.960336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.960656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.960704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.731 qpair failed and we were unable to recover it. 00:22:22.731 [2024-05-15 01:09:34.960918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.961135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.961178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.731 qpair failed and we were unable to recover it. 00:22:22.731 [2024-05-15 01:09:34.961416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.961613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.961637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.731 qpair failed and we were unable to recover it. 00:22:22.731 [2024-05-15 01:09:34.961820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.962068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.962094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.731 qpair failed and we were unable to recover it. 00:22:22.731 [2024-05-15 01:09:34.962308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.962649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.962709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.731 qpair failed and we were unable to recover it. 00:22:22.731 [2024-05-15 01:09:34.962948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.963149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.963174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.731 qpair failed and we were unable to recover it. 
00:22:22.731 [2024-05-15 01:09:34.963370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.963604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.963632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.731 qpair failed and we were unable to recover it. 00:22:22.731 [2024-05-15 01:09:34.963838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.964027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.964054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.731 qpair failed and we were unable to recover it. 00:22:22.731 [2024-05-15 01:09:34.964270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.964502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.964526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.731 qpair failed and we were unable to recover it. 00:22:22.731 [2024-05-15 01:09:34.964780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.965021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.965047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.731 qpair failed and we were unable to recover it. 00:22:22.731 [2024-05-15 01:09:34.965259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.965474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.965502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.731 qpair failed and we were unable to recover it. 00:22:22.731 [2024-05-15 01:09:34.965820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.966061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.731 [2024-05-15 01:09:34.966087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.731 qpair failed and we were unable to recover it. 00:22:22.732 [2024-05-15 01:09:34.966277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.966436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.966475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.732 qpair failed and we were unable to recover it. 
00:22:22.732 [2024-05-15 01:09:34.966716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.966920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.966968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.732 qpair failed and we were unable to recover it. 00:22:22.732 [2024-05-15 01:09:34.967169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.967337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.967361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.732 qpair failed and we were unable to recover it. 00:22:22.732 [2024-05-15 01:09:34.967553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.967717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.967743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.732 qpair failed and we were unable to recover it. 00:22:22.732 [2024-05-15 01:09:34.967958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.968147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.968176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.732 qpair failed and we were unable to recover it. 00:22:22.732 [2024-05-15 01:09:34.968414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.968583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.968609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.732 qpair failed and we were unable to recover it. 00:22:22.732 [2024-05-15 01:09:34.968815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.969036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.969061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.732 qpair failed and we were unable to recover it. 00:22:22.732 [2024-05-15 01:09:34.969277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.969497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.969543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.732 qpair failed and we were unable to recover it. 
00:22:22.732 [2024-05-15 01:09:34.969724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.969901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.969938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.732 qpair failed and we were unable to recover it. 00:22:22.732 [2024-05-15 01:09:34.970150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.970355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.970384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.732 qpair failed and we were unable to recover it. 00:22:22.732 [2024-05-15 01:09:34.970592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.970774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.970803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.732 qpair failed and we were unable to recover it. 00:22:22.732 [2024-05-15 01:09:34.971003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.971202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.971226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.732 qpair failed and we were unable to recover it. 00:22:22.732 [2024-05-15 01:09:34.971416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.971715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.971766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.732 qpair failed and we were unable to recover it. 00:22:22.732 [2024-05-15 01:09:34.971974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.972181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.972210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.732 qpair failed and we were unable to recover it. 00:22:22.732 [2024-05-15 01:09:34.972423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.972646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.972673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.732 qpair failed and we were unable to recover it. 
00:22:22.732 [2024-05-15 01:09:34.972883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.973063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.973093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.732 qpair failed and we were unable to recover it. 00:22:22.732 [2024-05-15 01:09:34.973329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.973669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.973729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.732 qpair failed and we were unable to recover it. 00:22:22.732 [2024-05-15 01:09:34.973941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.974128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.974157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.732 qpair failed and we were unable to recover it. 00:22:22.732 [2024-05-15 01:09:34.974377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.974649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.974699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.732 qpair failed and we were unable to recover it. 00:22:22.732 [2024-05-15 01:09:34.974905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.975126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.975152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.732 qpair failed and we were unable to recover it. 00:22:22.732 [2024-05-15 01:09:34.975404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.975617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.975645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.732 qpair failed and we were unable to recover it. 00:22:22.732 [2024-05-15 01:09:34.975890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.976120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.976146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.732 qpair failed and we were unable to recover it. 
00:22:22.732 [2024-05-15 01:09:34.976338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.976552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.976579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.732 qpair failed and we were unable to recover it. 00:22:22.732 [2024-05-15 01:09:34.976803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.977017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.732 [2024-05-15 01:09:34.977043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.732 qpair failed and we were unable to recover it. 00:22:22.732 [2024-05-15 01:09:34.977261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.977565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.977615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.733 qpair failed and we were unable to recover it. 00:22:22.733 [2024-05-15 01:09:34.977860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.978085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.978110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.733 qpair failed and we were unable to recover it. 00:22:22.733 [2024-05-15 01:09:34.978327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.978563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.978607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.733 qpair failed and we were unable to recover it. 00:22:22.733 [2024-05-15 01:09:34.978839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.979027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.979053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.733 qpair failed and we were unable to recover it. 00:22:22.733 [2024-05-15 01:09:34.979213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.979578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.979628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.733 qpair failed and we were unable to recover it. 
00:22:22.733 [2024-05-15 01:09:34.979838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.980025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.980050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.733 qpair failed and we were unable to recover it. 00:22:22.733 [2024-05-15 01:09:34.980255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.980627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.980671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.733 qpair failed and we were unable to recover it. 00:22:22.733 [2024-05-15 01:09:34.980880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.981094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.981118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.733 qpair failed and we were unable to recover it. 00:22:22.733 [2024-05-15 01:09:34.981367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.981677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.981728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.733 qpair failed and we were unable to recover it. 00:22:22.733 [2024-05-15 01:09:34.981958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.982195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.982219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.733 qpair failed and we were unable to recover it. 00:22:22.733 [2024-05-15 01:09:34.982434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.982745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.982805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.733 qpair failed and we were unable to recover it. 00:22:22.733 [2024-05-15 01:09:34.983031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.983200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.983225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.733 qpair failed and we were unable to recover it. 
00:22:22.733 [2024-05-15 01:09:34.983434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.983689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.983740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.733 qpair failed and we were unable to recover it. 00:22:22.733 [2024-05-15 01:09:34.983924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.984170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.984197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.733 qpair failed and we were unable to recover it. 00:22:22.733 [2024-05-15 01:09:34.984445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.984628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.984654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.733 qpair failed and we were unable to recover it. 00:22:22.733 [2024-05-15 01:09:34.984886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.985138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.985164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.733 qpair failed and we were unable to recover it. 00:22:22.733 [2024-05-15 01:09:34.985381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.985599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.985628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.733 qpair failed and we were unable to recover it. 00:22:22.733 [2024-05-15 01:09:34.985809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.986046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.986075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.733 qpair failed and we were unable to recover it. 00:22:22.733 [2024-05-15 01:09:34.986296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.986491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.986516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.733 qpair failed and we were unable to recover it. 
00:22:22.733 [2024-05-15 01:09:34.986700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.986920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.986954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.733 qpair failed and we were unable to recover it. 00:22:22.733 [2024-05-15 01:09:34.987142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.987379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.987407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.733 qpair failed and we were unable to recover it. 00:22:22.733 [2024-05-15 01:09:34.987586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.987764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.987791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.733 qpair failed and we were unable to recover it. 00:22:22.733 [2024-05-15 01:09:34.988031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.988195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.988219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.733 qpair failed and we were unable to recover it. 00:22:22.733 [2024-05-15 01:09:34.988433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.988676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.988700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.733 qpair failed and we were unable to recover it. 00:22:22.733 [2024-05-15 01:09:34.988895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.989129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.989155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.733 qpair failed and we were unable to recover it. 00:22:22.733 [2024-05-15 01:09:34.989313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.989529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.989554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.733 qpair failed and we were unable to recover it. 
00:22:22.733 [2024-05-15 01:09:34.989993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.990194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.990239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.733 qpair failed and we were unable to recover it. 00:22:22.733 [2024-05-15 01:09:34.990423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.990620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.733 [2024-05-15 01:09:34.990647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.733 qpair failed and we were unable to recover it. 00:22:22.733 [2024-05-15 01:09:34.990911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.991153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.991178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.734 qpair failed and we were unable to recover it. 00:22:22.734 [2024-05-15 01:09:34.991466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.991776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.991836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.734 qpair failed and we were unable to recover it. 00:22:22.734 [2024-05-15 01:09:34.992105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.992365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.992393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.734 qpair failed and we were unable to recover it. 00:22:22.734 [2024-05-15 01:09:34.992605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.992895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.992954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.734 qpair failed and we were unable to recover it. 00:22:22.734 [2024-05-15 01:09:34.993209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.993613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.993658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.734 qpair failed and we were unable to recover it. 
00:22:22.734 [2024-05-15 01:09:34.993898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.994102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.994131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.734 qpair failed and we were unable to recover it. 00:22:22.734 [2024-05-15 01:09:34.994351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.994594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.994621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.734 qpair failed and we were unable to recover it. 00:22:22.734 [2024-05-15 01:09:34.994832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.995040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.995069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.734 qpair failed and we were unable to recover it. 00:22:22.734 [2024-05-15 01:09:34.995306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.995491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.995523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.734 qpair failed and we were unable to recover it. 00:22:22.734 [2024-05-15 01:09:34.995759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.995971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.996000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.734 qpair failed and we were unable to recover it. 00:22:22.734 [2024-05-15 01:09:34.996185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.996403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.996429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.734 qpair failed and we were unable to recover it. 00:22:22.734 [2024-05-15 01:09:34.996713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.996946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.996989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.734 qpair failed and we were unable to recover it. 
00:22:22.734 [2024-05-15 01:09:34.997182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.997519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.997579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.734 qpair failed and we were unable to recover it. 00:22:22.734 [2024-05-15 01:09:34.997787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.997986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.998012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.734 qpair failed and we were unable to recover it. 00:22:22.734 [2024-05-15 01:09:34.998205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.998367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.998391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.734 qpair failed and we were unable to recover it. 00:22:22.734 [2024-05-15 01:09:34.998579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.998745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.998769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.734 qpair failed and we were unable to recover it. 00:22:22.734 [2024-05-15 01:09:34.999005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.999303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.999353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.734 qpair failed and we were unable to recover it. 00:22:22.734 [2024-05-15 01:09:34.999574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.999856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:34.999909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.734 qpair failed and we were unable to recover it. 00:22:22.734 [2024-05-15 01:09:35.000114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:35.000332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.734 [2024-05-15 01:09:35.000381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.734 qpair failed and we were unable to recover it. 
00:22:22.734 [2024-05-15 01:09:35.000592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:22.734 [2024-05-15 01:09:35.000818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:22.734 [2024-05-15 01:09:35.000845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420
00:22:22.734 qpair failed and we were unable to recover it.
[identical sequences of "connect() failed, errno = 111" (posix.c:1037:posix_sock_create), "sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420" (nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock), and "qpair failed and we were unable to recover it." repeat for every subsequent connection attempt from 01:09:35.000592 through 01:09:35.072719; duplicate entries elided]
00:22:22.740 [2024-05-15 01:09:35.072980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.073149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.073176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.740 qpair failed and we were unable to recover it. 00:22:22.740 [2024-05-15 01:09:35.073386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.073795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.073845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.740 qpair failed and we were unable to recover it. 00:22:22.740 [2024-05-15 01:09:35.074173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.074552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.074607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.740 qpair failed and we were unable to recover it. 00:22:22.740 [2024-05-15 01:09:35.074863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.075049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.075079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.740 qpair failed and we were unable to recover it. 00:22:22.740 [2024-05-15 01:09:35.075315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.075524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.075551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.740 qpair failed and we were unable to recover it. 00:22:22.740 [2024-05-15 01:09:35.075753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.075979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.076004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.740 qpair failed and we were unable to recover it. 00:22:22.740 [2024-05-15 01:09:35.076194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.076505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.076565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.740 qpair failed and we were unable to recover it. 
00:22:22.740 [2024-05-15 01:09:35.076775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.077019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.077044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.740 qpair failed and we were unable to recover it. 00:22:22.740 [2024-05-15 01:09:35.077208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.077398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.077426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.740 qpair failed and we were unable to recover it. 00:22:22.740 [2024-05-15 01:09:35.077639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.077821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.077849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.740 qpair failed and we were unable to recover it. 00:22:22.740 [2024-05-15 01:09:35.078065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.078239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.078267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.740 qpair failed and we were unable to recover it. 00:22:22.740 [2024-05-15 01:09:35.078506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.078694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.078722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.740 qpair failed and we were unable to recover it. 00:22:22.740 [2024-05-15 01:09:35.078937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.079150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.079180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.740 qpair failed and we were unable to recover it. 00:22:22.740 [2024-05-15 01:09:35.079395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.079638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.079666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.740 qpair failed and we were unable to recover it. 
00:22:22.740 [2024-05-15 01:09:35.079846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.080083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.080108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.740 qpair failed and we were unable to recover it. 00:22:22.740 [2024-05-15 01:09:35.080270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.080471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.080494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.740 qpair failed and we were unable to recover it. 00:22:22.740 [2024-05-15 01:09:35.080720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.080935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.080978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.740 qpair failed and we were unable to recover it. 00:22:22.740 [2024-05-15 01:09:35.081265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.081472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.081497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.740 qpair failed and we were unable to recover it. 00:22:22.740 [2024-05-15 01:09:35.081685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.081860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.081888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.740 qpair failed and we were unable to recover it. 00:22:22.740 [2024-05-15 01:09:35.082117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.082301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.082329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.740 qpair failed and we were unable to recover it. 00:22:22.740 [2024-05-15 01:09:35.082562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.082807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.082831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.740 qpair failed and we were unable to recover it. 
00:22:22.740 [2024-05-15 01:09:35.083092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.083327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.740 [2024-05-15 01:09:35.083387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.740 qpair failed and we were unable to recover it. 00:22:22.740 [2024-05-15 01:09:35.083818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.084125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.084150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.741 qpair failed and we were unable to recover it. 00:22:22.741 [2024-05-15 01:09:35.084407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.084636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.084664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.741 qpair failed and we were unable to recover it. 00:22:22.741 [2024-05-15 01:09:35.084873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.085097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.085124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.741 qpair failed and we were unable to recover it. 00:22:22.741 [2024-05-15 01:09:35.085328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.085515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.085542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.741 qpair failed and we were unable to recover it. 00:22:22.741 [2024-05-15 01:09:35.085758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.085986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.086012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.741 qpair failed and we were unable to recover it. 00:22:22.741 [2024-05-15 01:09:35.086256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.086524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.086575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.741 qpair failed and we were unable to recover it. 
00:22:22.741 [2024-05-15 01:09:35.086815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.087023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.087051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.741 qpair failed and we were unable to recover it. 00:22:22.741 [2024-05-15 01:09:35.087244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.087460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.087488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.741 qpair failed and we were unable to recover it. 00:22:22.741 [2024-05-15 01:09:35.087773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.088020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.088059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.741 qpair failed and we were unable to recover it. 00:22:22.741 [2024-05-15 01:09:35.088263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.088497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.088524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.741 qpair failed and we were unable to recover it. 00:22:22.741 [2024-05-15 01:09:35.088704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.089003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.089029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.741 qpair failed and we were unable to recover it. 00:22:22.741 [2024-05-15 01:09:35.089242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.089405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.089430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.741 qpair failed and we were unable to recover it. 00:22:22.741 [2024-05-15 01:09:35.089620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.089809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.089848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.741 qpair failed and we were unable to recover it. 
00:22:22.741 [2024-05-15 01:09:35.090073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.090297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.090322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.741 qpair failed and we were unable to recover it. 00:22:22.741 [2024-05-15 01:09:35.090603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.090783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.090814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.741 qpair failed and we were unable to recover it. 00:22:22.741 [2024-05-15 01:09:35.091093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.091302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.091331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.741 qpair failed and we were unable to recover it. 00:22:22.741 [2024-05-15 01:09:35.091598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.091837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.091864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.741 qpair failed and we were unable to recover it. 00:22:22.741 [2024-05-15 01:09:35.092056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.092303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.092331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.741 qpair failed and we were unable to recover it. 00:22:22.741 [2024-05-15 01:09:35.092623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.092904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.092938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.741 qpair failed and we were unable to recover it. 00:22:22.741 [2024-05-15 01:09:35.093161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.093356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.093382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.741 qpair failed and we were unable to recover it. 
00:22:22.741 [2024-05-15 01:09:35.093617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.093850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.093878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.741 qpair failed and we were unable to recover it. 00:22:22.741 [2024-05-15 01:09:35.094122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.094388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.741 [2024-05-15 01:09:35.094430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:22.741 qpair failed and we were unable to recover it. 00:22:23.017 [2024-05-15 01:09:35.094637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.094824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.094851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.017 qpair failed and we were unable to recover it. 00:22:23.017 [2024-05-15 01:09:35.095059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.095252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.095277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.017 qpair failed and we were unable to recover it. 00:22:23.017 [2024-05-15 01:09:35.095501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.095737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.095762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.017 qpair failed and we were unable to recover it. 00:22:23.017 [2024-05-15 01:09:35.095991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.096217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.096242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.017 qpair failed and we were unable to recover it. 00:22:23.017 [2024-05-15 01:09:35.096481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.096688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.096715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.017 qpair failed and we were unable to recover it. 
00:22:23.017 [2024-05-15 01:09:35.096922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.097140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.097165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.017 qpair failed and we were unable to recover it. 00:22:23.017 [2024-05-15 01:09:35.097373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.097547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.097573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.017 qpair failed and we were unable to recover it. 00:22:23.017 [2024-05-15 01:09:35.097750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.097910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.097966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.017 qpair failed and we were unable to recover it. 00:22:23.017 [2024-05-15 01:09:35.098144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.098342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.098370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.017 qpair failed and we were unable to recover it. 00:22:23.017 [2024-05-15 01:09:35.098557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.098768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.098796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.017 qpair failed and we were unable to recover it. 00:22:23.017 [2024-05-15 01:09:35.099016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.099203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.099228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.017 qpair failed and we were unable to recover it. 00:22:23.017 [2024-05-15 01:09:35.099451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.099683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.099711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.017 qpair failed and we were unable to recover it. 
00:22:23.017 [2024-05-15 01:09:35.099972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.100127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.100152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.017 qpair failed and we were unable to recover it. 00:22:23.017 [2024-05-15 01:09:35.100373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.100808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.100866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.017 qpair failed and we were unable to recover it. 00:22:23.017 [2024-05-15 01:09:35.101085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.101254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.101279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.017 qpair failed and we were unable to recover it. 00:22:23.017 [2024-05-15 01:09:35.101459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.101702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.101729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.017 qpair failed and we were unable to recover it. 00:22:23.017 [2024-05-15 01:09:35.101946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.102120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.102148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.017 qpair failed and we were unable to recover it. 00:22:23.017 [2024-05-15 01:09:35.102352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.102629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.102657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.017 qpair failed and we were unable to recover it. 00:22:23.017 [2024-05-15 01:09:35.102873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.103057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.103083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.017 qpair failed and we were unable to recover it. 
00:22:23.017 [2024-05-15 01:09:35.103326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.103601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.103626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.017 qpair failed and we were unable to recover it. 00:22:23.017 [2024-05-15 01:09:35.103861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.104099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.104128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.017 qpair failed and we were unable to recover it. 00:22:23.017 [2024-05-15 01:09:35.104335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.104582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.104609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.017 qpair failed and we were unable to recover it. 00:22:23.017 [2024-05-15 01:09:35.104826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.105011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.105036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.017 qpair failed and we were unable to recover it. 00:22:23.017 [2024-05-15 01:09:35.105281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.105563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.105591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.017 qpair failed and we were unable to recover it. 00:22:23.017 [2024-05-15 01:09:35.105809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.106021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.106050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.017 qpair failed and we were unable to recover it. 00:22:23.017 [2024-05-15 01:09:35.106238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.106408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.106433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.017 qpair failed and we were unable to recover it. 
00:22:23.017 [2024-05-15 01:09:35.106709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.107000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.017 [2024-05-15 01:09:35.107026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.018 qpair failed and we were unable to recover it. 00:22:23.018 [2024-05-15 01:09:35.107226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.107436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.107464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.018 qpair failed and we were unable to recover it. 00:22:23.018 [2024-05-15 01:09:35.107669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.107923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.107956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.018 qpair failed and we were unable to recover it. 00:22:23.018 [2024-05-15 01:09:35.108159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.108397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.108425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.018 qpair failed and we were unable to recover it. 00:22:23.018 [2024-05-15 01:09:35.108801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.109049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.109075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.018 qpair failed and we were unable to recover it. 00:22:23.018 [2024-05-15 01:09:35.109265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.109449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.109476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.018 qpair failed and we were unable to recover it. 00:22:23.018 [2024-05-15 01:09:35.109686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.109921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.109955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.018 qpair failed and we were unable to recover it. 
00:22:23.018 [2024-05-15 01:09:35.110188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.110428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.110456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.018 qpair failed and we were unable to recover it. 00:22:23.018 [2024-05-15 01:09:35.110727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.110953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.110983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.018 qpair failed and we were unable to recover it. 00:22:23.018 [2024-05-15 01:09:35.111191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.111522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.111574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.018 qpair failed and we were unable to recover it. 00:22:23.018 [2024-05-15 01:09:35.111793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.112036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.112061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.018 qpair failed and we were unable to recover it. 00:22:23.018 [2024-05-15 01:09:35.112283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.112445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.112470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.018 qpair failed and we were unable to recover it. 00:22:23.018 [2024-05-15 01:09:35.112687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.112936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.112980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.018 qpair failed and we were unable to recover it. 00:22:23.018 [2024-05-15 01:09:35.113167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.113392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.113417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.018 qpair failed and we were unable to recover it. 
00:22:23.018 [2024-05-15 01:09:35.113607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.113780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.113809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.018 qpair failed and we were unable to recover it. 00:22:23.018 [2024-05-15 01:09:35.114000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.114189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.114229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.018 qpair failed and we were unable to recover it. 00:22:23.018 [2024-05-15 01:09:35.114435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.114673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.114701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.018 qpair failed and we were unable to recover it. 00:22:23.018 [2024-05-15 01:09:35.114922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.115146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.115175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.018 qpair failed and we were unable to recover it. 00:22:23.018 [2024-05-15 01:09:35.115383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.115590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.115617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.018 qpair failed and we were unable to recover it. 00:22:23.018 [2024-05-15 01:09:35.115875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.116111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.116137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.018 qpair failed and we were unable to recover it. 00:22:23.018 [2024-05-15 01:09:35.116354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.116559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.116586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.018 qpair failed and we were unable to recover it. 
00:22:23.018 [2024-05-15 01:09:35.116799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.116987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.117012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.018 qpair failed and we were unable to recover it. 00:22:23.018 [2024-05-15 01:09:35.117204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.117417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.117442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.018 qpair failed and we were unable to recover it. 00:22:23.018 [2024-05-15 01:09:35.117832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.118070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.118097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.018 qpair failed and we were unable to recover it. 00:22:23.018 [2024-05-15 01:09:35.118320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.118532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.118560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.018 qpair failed and we were unable to recover it. 00:22:23.018 [2024-05-15 01:09:35.118775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.119016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.119044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.018 qpair failed and we were unable to recover it. 00:22:23.018 [2024-05-15 01:09:35.119286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.119495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.119525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.018 qpair failed and we were unable to recover it. 00:22:23.018 [2024-05-15 01:09:35.119804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.120085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.120114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.018 qpair failed and we were unable to recover it. 
00:22:23.018 [2024-05-15 01:09:35.120334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.120568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.120596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.018 qpair failed and we were unable to recover it. 00:22:23.018 [2024-05-15 01:09:35.120778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.120948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.018 [2024-05-15 01:09:35.120976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.018 qpair failed and we were unable to recover it. 00:22:23.019 [2024-05-15 01:09:35.121162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.019 [2024-05-15 01:09:35.121368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.019 [2024-05-15 01:09:35.121397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.019 qpair failed and we were unable to recover it. 00:22:23.019 [2024-05-15 01:09:35.121608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.019 [2024-05-15 01:09:35.121821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.019 [2024-05-15 01:09:35.121849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.019 qpair failed and we were unable to recover it. 00:22:23.019 [2024-05-15 01:09:35.122046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.019 [2024-05-15 01:09:35.122256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.019 [2024-05-15 01:09:35.122284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.019 qpair failed and we were unable to recover it. 00:22:23.019 [2024-05-15 01:09:35.122496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.019 [2024-05-15 01:09:35.122706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.019 [2024-05-15 01:09:35.122735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.019 qpair failed and we were unable to recover it. 00:22:23.019 [2024-05-15 01:09:35.122952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.019 [2024-05-15 01:09:35.123144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.019 [2024-05-15 01:09:35.123169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.019 qpair failed and we were unable to recover it. 
00:22:23.024 [2024-05-15 01:09:35.190298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.190505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.190532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.024 qpair failed and we were unable to recover it. 00:22:23.024 [2024-05-15 01:09:35.190716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.190871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.190914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.024 qpair failed and we were unable to recover it. 00:22:23.024 [2024-05-15 01:09:35.191158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.191313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.191338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.024 qpair failed and we were unable to recover it. 00:22:23.024 [2024-05-15 01:09:35.191492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.191773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.191800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.024 qpair failed and we were unable to recover it. 00:22:23.024 [2024-05-15 01:09:35.192082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.192266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.192293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.024 qpair failed and we were unable to recover it. 00:22:23.024 [2024-05-15 01:09:35.192483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.192688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.192716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.024 qpair failed and we were unable to recover it. 00:22:23.024 [2024-05-15 01:09:35.192893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.193109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.193138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.024 qpair failed and we were unable to recover it. 
00:22:23.024 [2024-05-15 01:09:35.193380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.193534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.193559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.024 qpair failed and we were unable to recover it. 00:22:23.024 [2024-05-15 01:09:35.193769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.193950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.193981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.024 qpair failed and we were unable to recover it. 00:22:23.024 [2024-05-15 01:09:35.194198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.194438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.194466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.024 qpair failed and we were unable to recover it. 00:22:23.024 [2024-05-15 01:09:35.194677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.194913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.194975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.024 qpair failed and we were unable to recover it. 00:22:23.024 [2024-05-15 01:09:35.195169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.195390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.195415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.024 qpair failed and we were unable to recover it. 00:22:23.024 [2024-05-15 01:09:35.195696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.195904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.195939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.024 qpair failed and we were unable to recover it. 00:22:23.024 [2024-05-15 01:09:35.196122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.196333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.196361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.024 qpair failed and we were unable to recover it. 
00:22:23.024 [2024-05-15 01:09:35.196595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.196771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.196798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.024 qpair failed and we were unable to recover it. 00:22:23.024 [2024-05-15 01:09:35.197033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.197243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.197272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.024 qpair failed and we were unable to recover it. 00:22:23.024 [2024-05-15 01:09:35.197477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.197670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.197695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.024 qpair failed and we were unable to recover it. 00:22:23.024 [2024-05-15 01:09:35.197858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.198080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.198108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.024 qpair failed and we were unable to recover it. 00:22:23.024 [2024-05-15 01:09:35.198285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.198492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.198522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.024 qpair failed and we were unable to recover it. 00:22:23.024 [2024-05-15 01:09:35.198739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.198927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.024 [2024-05-15 01:09:35.198961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.024 qpair failed and we were unable to recover it. 00:22:23.025 [2024-05-15 01:09:35.199132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.199367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.199395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.025 qpair failed and we were unable to recover it. 
00:22:23.025 [2024-05-15 01:09:35.199575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.199765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.199790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.025 qpair failed and we were unable to recover it. 00:22:23.025 [2024-05-15 01:09:35.199950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.200128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.200157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.025 qpair failed and we were unable to recover it. 00:22:23.025 [2024-05-15 01:09:35.200370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.200573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.200601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.025 qpair failed and we were unable to recover it. 00:22:23.025 [2024-05-15 01:09:35.200811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.200973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.200998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.025 qpair failed and we were unable to recover it. 00:22:23.025 [2024-05-15 01:09:35.201185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.201366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.201393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.025 qpair failed and we were unable to recover it. 00:22:23.025 [2024-05-15 01:09:35.201573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.201785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.201813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.025 qpair failed and we were unable to recover it. 00:22:23.025 [2024-05-15 01:09:35.202023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.202199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.202226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.025 qpair failed and we were unable to recover it. 
00:22:23.025 [2024-05-15 01:09:35.202420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.202609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.202633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.025 qpair failed and we were unable to recover it. 00:22:23.025 [2024-05-15 01:09:35.202908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.203144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.203173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.025 qpair failed and we were unable to recover it. 00:22:23.025 [2024-05-15 01:09:35.203385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.203566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.203594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.025 qpair failed and we were unable to recover it. 00:22:23.025 [2024-05-15 01:09:35.203782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.203945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.203988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.025 qpair failed and we were unable to recover it. 00:22:23.025 [2024-05-15 01:09:35.204229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.204390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.204415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.025 qpair failed and we were unable to recover it. 00:22:23.025 [2024-05-15 01:09:35.204614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.204784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.204811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.025 qpair failed and we were unable to recover it. 00:22:23.025 [2024-05-15 01:09:35.205014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.205192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.205221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.025 qpair failed and we were unable to recover it. 
00:22:23.025 [2024-05-15 01:09:35.205432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.205611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.205639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.025 qpair failed and we were unable to recover it. 00:22:23.025 [2024-05-15 01:09:35.205850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.206065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.206094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.025 qpair failed and we were unable to recover it. 00:22:23.025 [2024-05-15 01:09:35.206281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.206494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.206522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.025 qpair failed and we were unable to recover it. 00:22:23.025 [2024-05-15 01:09:35.206707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.206875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.206900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.025 qpair failed and we were unable to recover it. 00:22:23.025 [2024-05-15 01:09:35.207100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.207306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.207330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.025 qpair failed and we were unable to recover it. 00:22:23.025 [2024-05-15 01:09:35.207492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.207728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.207756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.025 qpair failed and we were unable to recover it. 00:22:23.025 [2024-05-15 01:09:35.207948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.208126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.208151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.025 qpair failed and we were unable to recover it. 
00:22:23.025 [2024-05-15 01:09:35.208367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.208582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.208606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.025 qpair failed and we were unable to recover it. 00:22:23.025 [2024-05-15 01:09:35.208849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.209124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.209149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.025 qpair failed and we were unable to recover it. 00:22:23.025 [2024-05-15 01:09:35.209363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.209540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.209567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.025 qpair failed and we were unable to recover it. 00:22:23.025 [2024-05-15 01:09:35.209805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.209994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.210019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.025 qpair failed and we were unable to recover it. 00:22:23.025 [2024-05-15 01:09:35.210209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.210416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.025 [2024-05-15 01:09:35.210448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.026 qpair failed and we were unable to recover it. 00:22:23.026 [2024-05-15 01:09:35.210621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.210794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.210824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.026 qpair failed and we were unable to recover it. 00:22:23.026 [2024-05-15 01:09:35.211024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.211231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.211261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.026 qpair failed and we were unable to recover it. 
00:22:23.026 [2024-05-15 01:09:35.211501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.211678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.211706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.026 qpair failed and we were unable to recover it. 00:22:23.026 [2024-05-15 01:09:35.211888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.212072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.212101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.026 qpair failed and we were unable to recover it. 00:22:23.026 [2024-05-15 01:09:35.212317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.212484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.212511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.026 qpair failed and we were unable to recover it. 00:22:23.026 [2024-05-15 01:09:35.212691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.212904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.212937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.026 qpair failed and we were unable to recover it. 00:22:23.026 [2024-05-15 01:09:35.213121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.213289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.213314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.026 qpair failed and we were unable to recover it. 00:22:23.026 [2024-05-15 01:09:35.213478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.213644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.213668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.026 qpair failed and we were unable to recover it. 00:22:23.026 [2024-05-15 01:09:35.213842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.214061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.214089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.026 qpair failed and we were unable to recover it. 
00:22:23.026 [2024-05-15 01:09:35.214269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.214472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.214504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.026 qpair failed and we were unable to recover it. 00:22:23.026 [2024-05-15 01:09:35.214718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.214884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.214909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.026 qpair failed and we were unable to recover it. 00:22:23.026 [2024-05-15 01:09:35.215131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.215344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.215370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.026 qpair failed and we were unable to recover it. 00:22:23.026 [2024-05-15 01:09:35.215575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.215754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.215780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.026 qpair failed and we were unable to recover it. 00:22:23.026 [2024-05-15 01:09:35.215959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.216130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.216156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.026 qpair failed and we were unable to recover it. 00:22:23.026 [2024-05-15 01:09:35.216339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.216587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.216611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.026 qpair failed and we were unable to recover it. 00:22:23.026 [2024-05-15 01:09:35.216802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.217050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.217078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.026 qpair failed and we were unable to recover it. 
00:22:23.026 [2024-05-15 01:09:35.217342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.217666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.217724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.026 qpair failed and we were unable to recover it. 00:22:23.026 [2024-05-15 01:09:35.217938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.218154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.218181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.026 qpair failed and we were unable to recover it. 00:22:23.026 [2024-05-15 01:09:35.218413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.218627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.218655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.026 qpair failed and we were unable to recover it. 00:22:23.026 [2024-05-15 01:09:35.218890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.219109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.219144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.026 qpair failed and we were unable to recover it. 00:22:23.026 [2024-05-15 01:09:35.219345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.219621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.219645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.026 qpair failed and we were unable to recover it. 00:22:23.026 [2024-05-15 01:09:35.219894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.220084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.220109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.026 qpair failed and we were unable to recover it. 00:22:23.026 [2024-05-15 01:09:35.220278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.220519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.220546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.026 qpair failed and we were unable to recover it. 
00:22:23.026 [2024-05-15 01:09:35.220876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.221115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.221140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.026 qpair failed and we were unable to recover it. 00:22:23.026 [2024-05-15 01:09:35.221297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.221461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.221485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.026 qpair failed and we were unable to recover it. 00:22:23.026 [2024-05-15 01:09:35.221700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.221938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.221965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.026 qpair failed and we were unable to recover it. 00:22:23.026 [2024-05-15 01:09:35.222137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.222349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.222376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.026 qpair failed and we were unable to recover it. 00:22:23.026 [2024-05-15 01:09:35.222615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.026 [2024-05-15 01:09:35.222850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.222877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.027 qpair failed and we were unable to recover it. 00:22:23.027 [2024-05-15 01:09:35.223089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.223295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.223322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.027 qpair failed and we were unable to recover it. 00:22:23.027 [2024-05-15 01:09:35.223496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.223675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.223708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.027 qpair failed and we were unable to recover it. 
00:22:23.027 [2024-05-15 01:09:35.223894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.224088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.224116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.027 qpair failed and we were unable to recover it. 00:22:23.027 [2024-05-15 01:09:35.224325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.224539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.224563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.027 qpair failed and we were unable to recover it. 00:22:23.027 [2024-05-15 01:09:35.224797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.225086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.225114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.027 qpair failed and we were unable to recover it. 00:22:23.027 [2024-05-15 01:09:35.225305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.225537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.225565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.027 qpair failed and we were unable to recover it. 00:22:23.027 [2024-05-15 01:09:35.225789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.225946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.225971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.027 qpair failed and we were unable to recover it. 00:22:23.027 [2024-05-15 01:09:35.226181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.226420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.226447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.027 qpair failed and we were unable to recover it. 00:22:23.027 [2024-05-15 01:09:35.226619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.226835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.226863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.027 qpair failed and we were unable to recover it. 
00:22:23.027 [2024-05-15 01:09:35.227095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.227273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.227301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.027 qpair failed and we were unable to recover it. 00:22:23.027 [2024-05-15 01:09:35.227488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.227694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.227721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.027 qpair failed and we were unable to recover it. 00:22:23.027 [2024-05-15 01:09:35.227941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.228119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.228147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.027 qpair failed and we were unable to recover it. 00:22:23.027 [2024-05-15 01:09:35.228378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.228607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.228633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.027 qpair failed and we were unable to recover it. 00:22:23.027 [2024-05-15 01:09:35.228822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.229002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.229032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.027 qpair failed and we were unable to recover it. 00:22:23.027 [2024-05-15 01:09:35.229309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.229522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.229550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.027 qpair failed and we were unable to recover it. 00:22:23.027 [2024-05-15 01:09:35.229835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.230077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.230102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.027 qpair failed and we were unable to recover it. 
00:22:23.027 [2024-05-15 01:09:35.230297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.230509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.230539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.027 qpair failed and we were unable to recover it. 00:22:23.027 [2024-05-15 01:09:35.230753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.230968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.230997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.027 qpair failed and we were unable to recover it. 00:22:23.027 [2024-05-15 01:09:35.231215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.231381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.231405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.027 qpair failed and we were unable to recover it. 00:22:23.027 [2024-05-15 01:09:35.231596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.231814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.231841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.027 qpair failed and we were unable to recover it. 00:22:23.027 [2024-05-15 01:09:35.232029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.232264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.232290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.027 qpair failed and we were unable to recover it. 00:22:23.027 [2024-05-15 01:09:35.232461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.232653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.232677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.027 qpair failed and we were unable to recover it. 00:22:23.027 [2024-05-15 01:09:35.232927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.233113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.233139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.027 qpair failed and we were unable to recover it. 
00:22:23.027 [2024-05-15 01:09:35.233320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.233489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.233512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.027 qpair failed and we were unable to recover it. 00:22:23.027 [2024-05-15 01:09:35.233740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.233910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.233945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.027 qpair failed and we were unable to recover it. 00:22:23.027 [2024-05-15 01:09:35.234162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.234484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.234511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.027 qpair failed and we were unable to recover it. 00:22:23.027 [2024-05-15 01:09:35.234695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.234904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.234939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.027 qpair failed and we were unable to recover it. 00:22:23.027 [2024-05-15 01:09:35.235128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.235335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.235360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.027 qpair failed and we were unable to recover it. 00:22:23.027 [2024-05-15 01:09:35.235598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.235835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.027 [2024-05-15 01:09:35.235860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.028 qpair failed and we were unable to recover it. 00:22:23.028 [2024-05-15 01:09:35.236043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.236264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.236292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.028 qpair failed and we were unable to recover it. 
00:22:23.028 [2024-05-15 01:09:35.236475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.236680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.236707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.028 qpair failed and we were unable to recover it. 00:22:23.028 [2024-05-15 01:09:35.236918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.237136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.237164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.028 qpair failed and we were unable to recover it. 00:22:23.028 [2024-05-15 01:09:35.237394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.237583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.237608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.028 qpair failed and we were unable to recover it. 00:22:23.028 [2024-05-15 01:09:35.237825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.238012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.238040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.028 qpair failed and we were unable to recover it. 00:22:23.028 [2024-05-15 01:09:35.238287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.238483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.238511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.028 qpair failed and we were unable to recover it. 00:22:23.028 [2024-05-15 01:09:35.238748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.238964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.238992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.028 qpair failed and we were unable to recover it. 00:22:23.028 [2024-05-15 01:09:35.239234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.239419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.239444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.028 qpair failed and we were unable to recover it. 
00:22:23.028 [2024-05-15 01:09:35.239631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.239841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.239871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.028 qpair failed and we were unable to recover it. 00:22:23.028 [2024-05-15 01:09:35.240089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.240303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.240331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.028 qpair failed and we were unable to recover it. 00:22:23.028 [2024-05-15 01:09:35.240540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.240709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.240737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.028 qpair failed and we were unable to recover it. 00:22:23.028 [2024-05-15 01:09:35.240914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.241140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.241165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.028 qpair failed and we were unable to recover it. 00:22:23.028 [2024-05-15 01:09:35.241350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.241565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.241592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.028 qpair failed and we were unable to recover it. 00:22:23.028 [2024-05-15 01:09:35.241806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.242029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.242056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.028 qpair failed and we were unable to recover it. 00:22:23.028 [2024-05-15 01:09:35.242265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.242459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.242487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.028 qpair failed and we were unable to recover it. 
00:22:23.028 [2024-05-15 01:09:35.242694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.242867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.242895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.028 qpair failed and we were unable to recover it. 00:22:23.028 [2024-05-15 01:09:35.243114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.243314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.243338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.028 qpair failed and we were unable to recover it. 00:22:23.028 [2024-05-15 01:09:35.243540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.243780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.243808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.028 qpair failed and we were unable to recover it. 00:22:23.028 [2024-05-15 01:09:35.244050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.244417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.244478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.028 qpair failed and we were unable to recover it. 00:22:23.028 [2024-05-15 01:09:35.244708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.244912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.244948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.028 qpair failed and we were unable to recover it. 00:22:23.028 [2024-05-15 01:09:35.245182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.245389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.245413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.028 qpair failed and we were unable to recover it. 00:22:23.028 [2024-05-15 01:09:35.245695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.245984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.246013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.028 qpair failed and we were unable to recover it. 
00:22:23.028 [2024-05-15 01:09:35.246225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.246534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.246588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.028 qpair failed and we were unable to recover it. 00:22:23.028 [2024-05-15 01:09:35.246799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.247016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.247046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.028 qpair failed and we were unable to recover it. 00:22:23.028 [2024-05-15 01:09:35.247255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.247440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.247467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.028 qpair failed and we were unable to recover it. 00:22:23.028 [2024-05-15 01:09:35.247669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.247914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.247950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.028 qpair failed and we were unable to recover it. 00:22:23.028 [2024-05-15 01:09:35.248167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.248440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.248465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.028 qpair failed and we were unable to recover it. 00:22:23.028 [2024-05-15 01:09:35.248681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.248894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.248921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.028 qpair failed and we were unable to recover it. 00:22:23.028 [2024-05-15 01:09:35.249176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.249384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.249411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.028 qpair failed and we were unable to recover it. 
00:22:23.028 [2024-05-15 01:09:35.249589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.028 [2024-05-15 01:09:35.249796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.249824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.029 qpair failed and we were unable to recover it. 00:22:23.029 [2024-05-15 01:09:35.250033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.250246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.250274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.029 qpair failed and we were unable to recover it. 00:22:23.029 [2024-05-15 01:09:35.250509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.250688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.250715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.029 qpair failed and we were unable to recover it. 00:22:23.029 [2024-05-15 01:09:35.250927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.251113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.251140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.029 qpair failed and we were unable to recover it. 00:22:23.029 [2024-05-15 01:09:35.251358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.251544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.251568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.029 qpair failed and we were unable to recover it. 00:22:23.029 [2024-05-15 01:09:35.251752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.251953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.251981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.029 qpair failed and we were unable to recover it. 00:22:23.029 [2024-05-15 01:09:35.252196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.252389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.252419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.029 qpair failed and we were unable to recover it. 
00:22:23.029 [2024-05-15 01:09:35.252613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.252826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.252854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.029 qpair failed and we were unable to recover it. 00:22:23.029 [2024-05-15 01:09:35.253078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.253240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.253265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.029 qpair failed and we were unable to recover it. 00:22:23.029 [2024-05-15 01:09:35.253557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.253767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.253795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.029 qpair failed and we were unable to recover it. 00:22:23.029 [2024-05-15 01:09:35.254028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.254231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.254298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.029 qpair failed and we were unable to recover it. 00:22:23.029 [2024-05-15 01:09:35.254544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.254751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.254778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.029 qpair failed and we were unable to recover it. 00:22:23.029 [2024-05-15 01:09:35.255018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.255230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.255257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.029 qpair failed and we were unable to recover it. 00:22:23.029 [2024-05-15 01:09:35.255504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.255739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.255767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.029 qpair failed and we were unable to recover it. 
00:22:23.029 [2024-05-15 01:09:35.256061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.256267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.256294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.029 qpair failed and we were unable to recover it. 00:22:23.029 [2024-05-15 01:09:35.256534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.256746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.256771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.029 qpair failed and we were unable to recover it. 00:22:23.029 [2024-05-15 01:09:35.256964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.257130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.257156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.029 qpair failed and we were unable to recover it. 00:22:23.029 [2024-05-15 01:09:35.257340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.257575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.257603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.029 qpair failed and we were unable to recover it. 00:22:23.029 [2024-05-15 01:09:35.257819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.258034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.258063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.029 qpair failed and we were unable to recover it. 00:22:23.029 [2024-05-15 01:09:35.258271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.258488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.258513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.029 qpair failed and we were unable to recover it. 00:22:23.029 [2024-05-15 01:09:35.258749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.258986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.259015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.029 qpair failed and we were unable to recover it. 
00:22:23.029 [2024-05-15 01:09:35.259250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.259664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.259723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.029 qpair failed and we were unable to recover it. 00:22:23.029 [2024-05-15 01:09:35.259958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.260151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.260179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.029 qpair failed and we were unable to recover it. 00:22:23.029 [2024-05-15 01:09:35.260389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.260592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.260616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.029 qpair failed and we were unable to recover it. 00:22:23.029 [2024-05-15 01:09:35.260813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.261057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.261087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.029 qpair failed and we were unable to recover it. 00:22:23.029 [2024-05-15 01:09:35.261401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.261637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.261664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.029 qpair failed and we were unable to recover it. 00:22:23.029 [2024-05-15 01:09:35.261882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.262082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.262108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.029 qpair failed and we were unable to recover it. 00:22:23.029 [2024-05-15 01:09:35.262358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.262688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.262743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.029 qpair failed and we were unable to recover it. 
00:22:23.029 [2024-05-15 01:09:35.263010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.263225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.263253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.029 qpair failed and we were unable to recover it. 00:22:23.029 [2024-05-15 01:09:35.263507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.263693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.029 [2024-05-15 01:09:35.263718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.029 qpair failed and we were unable to recover it. 00:22:23.030 [2024-05-15 01:09:35.263964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.264157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.264185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.030 qpair failed and we were unable to recover it. 00:22:23.030 [2024-05-15 01:09:35.264424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.264631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.264658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.030 qpair failed and we were unable to recover it. 00:22:23.030 [2024-05-15 01:09:35.264865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.265078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.265106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.030 qpair failed and we were unable to recover it. 00:22:23.030 [2024-05-15 01:09:35.265349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.265541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.265566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.030 qpair failed and we were unable to recover it. 00:22:23.030 [2024-05-15 01:09:35.265754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.265915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.265949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.030 qpair failed and we were unable to recover it. 
00:22:23.030 [2024-05-15 01:09:35.266230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.266450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.266478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.030 qpair failed and we were unable to recover it. 00:22:23.030 [2024-05-15 01:09:35.266682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.266937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.266980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.030 qpair failed and we were unable to recover it. 00:22:23.030 [2024-05-15 01:09:35.267212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.267397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.267424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.030 qpair failed and we were unable to recover it. 00:22:23.030 [2024-05-15 01:09:35.267634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.267814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.267838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.030 qpair failed and we were unable to recover it. 00:22:23.030 [2024-05-15 01:09:35.268024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.268234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.268261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.030 qpair failed and we were unable to recover it. 00:22:23.030 [2024-05-15 01:09:35.268498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.268771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.268820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.030 qpair failed and we were unable to recover it. 00:22:23.030 [2024-05-15 01:09:35.269032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.269212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.269242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.030 qpair failed and we were unable to recover it. 
00:22:23.030 [2024-05-15 01:09:35.269456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.269637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.269665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.030 qpair failed and we were unable to recover it. 00:22:23.030 [2024-05-15 01:09:35.269914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.270139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.270167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.030 qpair failed and we were unable to recover it. 00:22:23.030 [2024-05-15 01:09:35.270385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.270601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.270628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.030 qpair failed and we were unable to recover it. 00:22:23.030 [2024-05-15 01:09:35.270810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.270991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.271021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.030 qpair failed and we were unable to recover it. 00:22:23.030 [2024-05-15 01:09:35.271226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.271496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.271545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.030 qpair failed and we were unable to recover it. 00:22:23.030 [2024-05-15 01:09:35.271748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.271958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.271987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.030 qpair failed and we were unable to recover it. 00:22:23.030 [2024-05-15 01:09:35.272265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.272504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.272531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.030 qpair failed and we were unable to recover it. 
00:22:23.030 [2024-05-15 01:09:35.272834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.273124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.273149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.030 qpair failed and we were unable to recover it. 00:22:23.030 [2024-05-15 01:09:35.273358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.273629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.273659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.030 qpair failed and we were unable to recover it. 00:22:23.030 [2024-05-15 01:09:35.273871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.274055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.274083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.030 qpair failed and we were unable to recover it. 00:22:23.030 [2024-05-15 01:09:35.274276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.274471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.274496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.030 qpair failed and we were unable to recover it. 00:22:23.030 [2024-05-15 01:09:35.274712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.274921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.274957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.030 qpair failed and we were unable to recover it. 00:22:23.030 [2024-05-15 01:09:35.275194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.275426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.030 [2024-05-15 01:09:35.275454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.031 qpair failed and we were unable to recover it. 00:22:23.031 [2024-05-15 01:09:35.275663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.275876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.275905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.031 qpair failed and we were unable to recover it. 
00:22:23.031 [2024-05-15 01:09:35.276114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.276332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.276357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.031 qpair failed and we were unable to recover it. 00:22:23.031 [2024-05-15 01:09:35.276674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.276940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.276969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.031 qpair failed and we were unable to recover it. 00:22:23.031 [2024-05-15 01:09:35.277168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.277333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.277359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.031 qpair failed and we were unable to recover it. 00:22:23.031 [2024-05-15 01:09:35.277570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.277781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.277808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.031 qpair failed and we were unable to recover it. 00:22:23.031 [2024-05-15 01:09:35.278012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.278274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.278302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.031 qpair failed and we were unable to recover it. 00:22:23.031 [2024-05-15 01:09:35.278560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.278770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.278798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.031 qpair failed and we were unable to recover it. 00:22:23.031 [2024-05-15 01:09:35.279005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.279237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.279293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.031 qpair failed and we were unable to recover it. 
00:22:23.031 [2024-05-15 01:09:35.279504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.279678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.279706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.031 qpair failed and we were unable to recover it. 00:22:23.031 [2024-05-15 01:09:35.279952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.280141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.280170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.031 qpair failed and we were unable to recover it. 00:22:23.031 [2024-05-15 01:09:35.280415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.280631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.280656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.031 qpair failed and we were unable to recover it. 00:22:23.031 [2024-05-15 01:09:35.280911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.281101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.281129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.031 qpair failed and we were unable to recover it. 00:22:23.031 [2024-05-15 01:09:35.281339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.281546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.281575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.031 qpair failed and we were unable to recover it. 00:22:23.031 [2024-05-15 01:09:35.281848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.282044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.282069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.031 qpair failed and we were unable to recover it. 00:22:23.031 [2024-05-15 01:09:35.282361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.282752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.282800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.031 qpair failed and we were unable to recover it. 
00:22:23.031 [2024-05-15 01:09:35.283015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.283191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.283218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.031 qpair failed and we were unable to recover it. 00:22:23.031 [2024-05-15 01:09:35.283426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.283640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.283664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.031 qpair failed and we were unable to recover it. 00:22:23.031 [2024-05-15 01:09:35.283880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.284125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.284153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.031 qpair failed and we were unable to recover it. 00:22:23.031 [2024-05-15 01:09:35.284400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.284617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.284641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.031 qpair failed and we were unable to recover it. 00:22:23.031 [2024-05-15 01:09:35.284803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.285025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.285058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.031 qpair failed and we were unable to recover it. 00:22:23.031 [2024-05-15 01:09:35.285267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.285479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.285509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.031 qpair failed and we were unable to recover it. 00:22:23.031 [2024-05-15 01:09:35.285693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.285951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.285980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.031 qpair failed and we were unable to recover it. 
00:22:23.031 [2024-05-15 01:09:35.286219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.286427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.286454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.031 qpair failed and we were unable to recover it. 00:22:23.031 [2024-05-15 01:09:35.286664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.286822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.286846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.031 qpair failed and we were unable to recover it. 00:22:23.031 [2024-05-15 01:09:35.287027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.287242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.287283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.031 qpair failed and we were unable to recover it. 00:22:23.031 [2024-05-15 01:09:35.287494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.287683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.287722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.031 qpair failed and we were unable to recover it. 00:22:23.031 [2024-05-15 01:09:35.287922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.288095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.288120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.031 qpair failed and we were unable to recover it. 00:22:23.031 [2024-05-15 01:09:35.288347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.288501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.288528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.031 qpair failed and we were unable to recover it. 00:22:23.031 [2024-05-15 01:09:35.288714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.288891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.288914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.031 qpair failed and we were unable to recover it. 
00:22:23.031 [2024-05-15 01:09:35.289150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.031 [2024-05-15 01:09:35.289364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.289393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.032 qpair failed and we were unable to recover it. 00:22:23.032 [2024-05-15 01:09:35.289732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.289971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.290011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.032 qpair failed and we were unable to recover it. 00:22:23.032 [2024-05-15 01:09:35.290186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.290390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.290417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.032 qpair failed and we were unable to recover it. 00:22:23.032 [2024-05-15 01:09:35.290629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.290832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.290861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.032 qpair failed and we were unable to recover it. 00:22:23.032 [2024-05-15 01:09:35.291103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.291316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.291343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.032 qpair failed and we were unable to recover it. 00:22:23.032 [2024-05-15 01:09:35.291599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.291842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.291869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.032 qpair failed and we were unable to recover it. 00:22:23.032 [2024-05-15 01:09:35.292110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.292309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.292336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.032 qpair failed and we were unable to recover it. 
00:22:23.032 [2024-05-15 01:09:35.292573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.292769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.292796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.032 qpair failed and we were unable to recover it. 00:22:23.032 [2024-05-15 01:09:35.293014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.293212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.293236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.032 qpair failed and we were unable to recover it. 00:22:23.032 [2024-05-15 01:09:35.293399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.293608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.293636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.032 qpair failed and we were unable to recover it. 00:22:23.032 [2024-05-15 01:09:35.293819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.294005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.294039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.032 qpair failed and we were unable to recover it. 00:22:23.032 [2024-05-15 01:09:35.294280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.294510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.294537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.032 qpair failed and we were unable to recover it. 00:22:23.032 [2024-05-15 01:09:35.294743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.294965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.294994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.032 qpair failed and we were unable to recover it. 00:22:23.032 [2024-05-15 01:09:35.295212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.295381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.295404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.032 qpair failed and we were unable to recover it. 
00:22:23.032 [2024-05-15 01:09:35.295642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.295858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.295886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.032 qpair failed and we were unable to recover it. 00:22:23.032 [2024-05-15 01:09:35.296098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.296285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.296313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.032 qpair failed and we were unable to recover it. 00:22:23.032 [2024-05-15 01:09:35.296530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.296743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.296772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.032 qpair failed and we were unable to recover it. 00:22:23.032 [2024-05-15 01:09:35.296982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.297154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.297183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.032 qpair failed and we were unable to recover it. 00:22:23.032 [2024-05-15 01:09:35.297372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.297546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.297569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.032 qpair failed and we were unable to recover it. 00:22:23.032 [2024-05-15 01:09:35.297765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.298020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.298045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.032 qpair failed and we were unable to recover it. 00:22:23.032 [2024-05-15 01:09:35.298330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.298742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.298795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.032 qpair failed and we were unable to recover it. 
00:22:23.032 [2024-05-15 01:09:35.299026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.299200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.299224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.032 qpair failed and we were unable to recover it. 00:22:23.032 [2024-05-15 01:09:35.299482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.299837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.299888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.032 qpair failed and we were unable to recover it. 00:22:23.032 [2024-05-15 01:09:35.300150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.300362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.300390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.032 qpair failed and we were unable to recover it. 00:22:23.032 [2024-05-15 01:09:35.300577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.300749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.300777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.032 qpair failed and we were unable to recover it. 00:22:23.032 [2024-05-15 01:09:35.300996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.301209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.301236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.032 qpair failed and we were unable to recover it. 00:22:23.032 [2024-05-15 01:09:35.301409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.301642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.301666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.032 qpair failed and we were unable to recover it. 00:22:23.032 [2024-05-15 01:09:35.301860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.302084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.302110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.032 qpair failed and we were unable to recover it. 
00:22:23.032 [2024-05-15 01:09:35.302325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.302531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.302558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.032 qpair failed and we were unable to recover it. 00:22:23.032 [2024-05-15 01:09:35.302793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.303016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.032 [2024-05-15 01:09:35.303042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.032 qpair failed and we were unable to recover it. 00:22:23.032 [2024-05-15 01:09:35.303225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.303462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.303489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.033 qpair failed and we were unable to recover it. 00:22:23.033 [2024-05-15 01:09:35.303722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.303959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.303987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.033 qpair failed and we were unable to recover it. 00:22:23.033 [2024-05-15 01:09:35.304227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.304464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.304491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.033 qpair failed and we were unable to recover it. 00:22:23.033 [2024-05-15 01:09:35.304732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.304923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.304961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.033 qpair failed and we were unable to recover it. 00:22:23.033 [2024-05-15 01:09:35.305150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.305360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.305385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.033 qpair failed and we were unable to recover it. 
00:22:23.033 [2024-05-15 01:09:35.305618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.305825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.305854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.033 qpair failed and we were unable to recover it. 00:22:23.033 [2024-05-15 01:09:35.306075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.306307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.306335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.033 qpair failed and we were unable to recover it. 00:22:23.033 [2024-05-15 01:09:35.306509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.306791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.306847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.033 qpair failed and we were unable to recover it. 00:22:23.033 [2024-05-15 01:09:35.307074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.307286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.307311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.033 qpair failed and we were unable to recover it. 00:22:23.033 [2024-05-15 01:09:35.307525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.307753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.307780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.033 qpair failed and we were unable to recover it. 00:22:23.033 [2024-05-15 01:09:35.308012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.308211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.308239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.033 qpair failed and we were unable to recover it. 00:22:23.033 [2024-05-15 01:09:35.308429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.308635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.308663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.033 qpair failed and we were unable to recover it. 
00:22:23.033 [2024-05-15 01:09:35.308841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.309047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.309077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.033 qpair failed and we were unable to recover it. 00:22:23.033 [2024-05-15 01:09:35.309291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.309497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.309525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.033 qpair failed and we were unable to recover it. 00:22:23.033 [2024-05-15 01:09:35.309739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.309948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.309978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.033 qpair failed and we were unable to recover it. 00:22:23.033 [2024-05-15 01:09:35.310166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.310331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.310355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.033 qpair failed and we were unable to recover it. 00:22:23.033 [2024-05-15 01:09:35.310608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.310826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.310853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.033 qpair failed and we were unable to recover it. 00:22:23.033 [2024-05-15 01:09:35.311090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.311266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.311295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.033 qpair failed and we were unable to recover it. 00:22:23.033 [2024-05-15 01:09:35.311495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.311713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.311741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.033 qpair failed and we were unable to recover it. 
00:22:23.033 [2024-05-15 01:09:35.311952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.312189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.312216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.033 qpair failed and we were unable to recover it. 00:22:23.033 [2024-05-15 01:09:35.312417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.312717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.312783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.033 qpair failed and we were unable to recover it. 00:22:23.033 [2024-05-15 01:09:35.313001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.313193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.313222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.033 qpair failed and we were unable to recover it. 00:22:23.033 [2024-05-15 01:09:35.313451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.313719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.313743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.033 qpair failed and we were unable to recover it. 00:22:23.033 [2024-05-15 01:09:35.313994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.314195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.314223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.033 qpair failed and we were unable to recover it. 00:22:23.033 [2024-05-15 01:09:35.314406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.314587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.314615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.033 qpair failed and we were unable to recover it. 00:22:23.033 [2024-05-15 01:09:35.314837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.315088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.315117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.033 qpair failed and we were unable to recover it. 
00:22:23.033 [2024-05-15 01:09:35.315301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.315465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.315490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.033 qpair failed and we were unable to recover it. 00:22:23.033 [2024-05-15 01:09:35.315672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.315861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.315890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.033 qpair failed and we were unable to recover it. 00:22:23.033 [2024-05-15 01:09:35.316086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.316317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.316347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.033 qpair failed and we were unable to recover it. 00:22:23.033 [2024-05-15 01:09:35.316535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.033 [2024-05-15 01:09:35.316733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.316761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.034 qpair failed and we were unable to recover it. 00:22:23.034 [2024-05-15 01:09:35.316948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.317144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.317168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.034 qpair failed and we were unable to recover it. 00:22:23.034 [2024-05-15 01:09:35.317381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.317578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.317605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.034 qpair failed and we were unable to recover it. 00:22:23.034 [2024-05-15 01:09:35.317812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.318021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.318046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.034 qpair failed and we were unable to recover it. 
00:22:23.034 [2024-05-15 01:09:35.318274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.318453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.318483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.034 qpair failed and we were unable to recover it. 00:22:23.034 [2024-05-15 01:09:35.318704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.318910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.318946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.034 qpair failed and we were unable to recover it. 00:22:23.034 [2024-05-15 01:09:35.319127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.319327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.319354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.034 qpair failed and we were unable to recover it. 00:22:23.034 [2024-05-15 01:09:35.319544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.319734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.319762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.034 qpair failed and we were unable to recover it. 00:22:23.034 [2024-05-15 01:09:35.319979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.320182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.320209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.034 qpair failed and we were unable to recover it. 00:22:23.034 [2024-05-15 01:09:35.320402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.320630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.320658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.034 qpair failed and we were unable to recover it. 00:22:23.034 [2024-05-15 01:09:35.320877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.321081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.321109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.034 qpair failed and we were unable to recover it. 
00:22:23.034 [2024-05-15 01:09:35.321324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.321555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.321579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.034 qpair failed and we were unable to recover it. 00:22:23.034 [2024-05-15 01:09:35.321793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.322005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.322035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.034 qpair failed and we were unable to recover it. 00:22:23.034 [2024-05-15 01:09:35.322264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.322483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.322510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.034 qpair failed and we were unable to recover it. 00:22:23.034 [2024-05-15 01:09:35.322813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.323064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.323099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.034 qpair failed and we were unable to recover it. 00:22:23.034 [2024-05-15 01:09:35.323303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.323549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.323573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.034 qpair failed and we were unable to recover it. 00:22:23.034 [2024-05-15 01:09:35.323790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.324009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.324035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.034 qpair failed and we were unable to recover it. 00:22:23.034 [2024-05-15 01:09:35.324193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.324400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.324486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.034 qpair failed and we were unable to recover it. 
00:22:23.034 [2024-05-15 01:09:35.324729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.324951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.324977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.034 qpair failed and we were unable to recover it. 00:22:23.034 [2024-05-15 01:09:35.325170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.325390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.325441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.034 qpair failed and we were unable to recover it. 00:22:23.034 [2024-05-15 01:09:35.325651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.325886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.325913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.034 qpair failed and we were unable to recover it. 00:22:23.034 [2024-05-15 01:09:35.326135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.326340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.326368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.034 qpair failed and we were unable to recover it. 00:22:23.034 [2024-05-15 01:09:35.326591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.326773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.326798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.034 qpair failed and we were unable to recover it. 00:22:23.034 [2024-05-15 01:09:35.326984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.327216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.327241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.034 qpair failed and we were unable to recover it. 00:22:23.034 [2024-05-15 01:09:35.327453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.327659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.034 [2024-05-15 01:09:35.327685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.034 qpair failed and we were unable to recover it. 
00:22:23.035 [2024-05-15 01:09:35.327924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.328150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.328175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.035 qpair failed and we were unable to recover it. 00:22:23.035 [2024-05-15 01:09:35.328385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.328583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.328608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.035 qpair failed and we were unable to recover it. 00:22:23.035 [2024-05-15 01:09:35.328796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.329011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.329037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.035 qpair failed and we were unable to recover it. 00:22:23.035 [2024-05-15 01:09:35.329246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.329475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.329502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.035 qpair failed and we were unable to recover it. 00:22:23.035 [2024-05-15 01:09:35.329706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.329901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.329926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.035 qpair failed and we were unable to recover it. 00:22:23.035 [2024-05-15 01:09:35.330189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.330422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.330449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.035 qpair failed and we were unable to recover it. 00:22:23.035 [2024-05-15 01:09:35.330655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.330843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.330867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.035 qpair failed and we were unable to recover it. 
00:22:23.035 [2024-05-15 01:09:35.331078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.331257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.331282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.035 qpair failed and we were unable to recover it. 00:22:23.035 [2024-05-15 01:09:35.331478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.331690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.331715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.035 qpair failed and we were unable to recover it. 00:22:23.035 [2024-05-15 01:09:35.331904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.332099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.332128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.035 qpair failed and we were unable to recover it. 00:22:23.035 [2024-05-15 01:09:35.332334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.332515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.332542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.035 qpair failed and we were unable to recover it. 00:22:23.035 [2024-05-15 01:09:35.332779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.332988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.333014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.035 qpair failed and we were unable to recover it. 00:22:23.035 [2024-05-15 01:09:35.333278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.333572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.333599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.035 qpair failed and we were unable to recover it. 00:22:23.035 [2024-05-15 01:09:35.333838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.333996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.334021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.035 qpair failed and we were unable to recover it. 
00:22:23.035 [2024-05-15 01:09:35.334206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.334486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.334538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.035 qpair failed and we were unable to recover it. 00:22:23.035 [2024-05-15 01:09:35.334775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.334957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.334985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.035 qpair failed and we were unable to recover it. 00:22:23.035 [2024-05-15 01:09:35.335231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.335471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.335498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.035 qpair failed and we were unable to recover it. 00:22:23.035 [2024-05-15 01:09:35.335760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.335998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.336024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.035 qpair failed and we were unable to recover it. 00:22:23.035 [2024-05-15 01:09:35.336271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.336679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.336734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.035 qpair failed and we were unable to recover it. 00:22:23.035 [2024-05-15 01:09:35.336955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.337142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.337182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.035 qpair failed and we were unable to recover it. 00:22:23.035 [2024-05-15 01:09:35.337399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.337578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.337605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.035 qpair failed and we were unable to recover it. 
00:22:23.035 [2024-05-15 01:09:35.337820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.338028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.338057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.035 qpair failed and we were unable to recover it. 00:22:23.035 [2024-05-15 01:09:35.338276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.338544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.338588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.035 qpair failed and we were unable to recover it. 00:22:23.035 [2024-05-15 01:09:35.338825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.339041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.339069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.035 qpair failed and we were unable to recover it. 00:22:23.035 [2024-05-15 01:09:35.339248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.339510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.339560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.035 qpair failed and we were unable to recover it. 00:22:23.035 [2024-05-15 01:09:35.339908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.340187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.340215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.035 qpair failed and we were unable to recover it. 00:22:23.035 [2024-05-15 01:09:35.340549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.340994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.341023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.035 qpair failed and we were unable to recover it. 00:22:23.035 [2024-05-15 01:09:35.341240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.341454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.341481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.035 qpair failed and we were unable to recover it. 
00:22:23.035 [2024-05-15 01:09:35.341709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.341919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.341953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.035 qpair failed and we were unable to recover it. 00:22:23.035 [2024-05-15 01:09:35.342134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.035 [2024-05-15 01:09:35.342369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.342396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.036 qpair failed and we were unable to recover it. 00:22:23.036 [2024-05-15 01:09:35.342638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.342851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.342876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.036 qpair failed and we were unable to recover it. 00:22:23.036 [2024-05-15 01:09:35.343141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.343355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.343383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.036 qpair failed and we were unable to recover it. 00:22:23.036 [2024-05-15 01:09:35.343574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.343776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.343804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.036 qpair failed and we were unable to recover it. 00:22:23.036 [2024-05-15 01:09:35.344050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.344251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.344276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.036 qpair failed and we were unable to recover it. 00:22:23.036 [2024-05-15 01:09:35.344473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.344684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.344711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.036 qpair failed and we were unable to recover it. 
00:22:23.036 [2024-05-15 01:09:35.344921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.345140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.345169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.036 qpair failed and we were unable to recover it. 00:22:23.036 [2024-05-15 01:09:35.345407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.345582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.345609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.036 qpair failed and we were unable to recover it. 00:22:23.036 [2024-05-15 01:09:35.345817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.346066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.346092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.036 qpair failed and we were unable to recover it. 00:22:23.036 [2024-05-15 01:09:35.346279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.346518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.346546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.036 qpair failed and we were unable to recover it. 00:22:23.036 [2024-05-15 01:09:35.346785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.346984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.347012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.036 qpair failed and we were unable to recover it. 00:22:23.036 [2024-05-15 01:09:35.347216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.347433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.347460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.036 qpair failed and we were unable to recover it. 00:22:23.036 [2024-05-15 01:09:35.347635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.347873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.347898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.036 qpair failed and we were unable to recover it. 
00:22:23.036 [2024-05-15 01:09:35.348100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.348313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.348340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.036 qpair failed and we were unable to recover it. 00:22:23.036 [2024-05-15 01:09:35.348527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.348835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.348862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.036 qpair failed and we were unable to recover it. 00:22:23.036 [2024-05-15 01:09:35.349098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.349311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.349340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.036 qpair failed and we were unable to recover it. 00:22:23.036 [2024-05-15 01:09:35.349595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.349791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.349817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.036 qpair failed and we were unable to recover it. 00:22:23.036 [2024-05-15 01:09:35.350037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.350246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.350276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.036 qpair failed and we were unable to recover it. 00:22:23.036 [2024-05-15 01:09:35.350451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.350679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.350707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.036 qpair failed and we were unable to recover it. 00:22:23.036 [2024-05-15 01:09:35.350953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.351360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.351414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.036 qpair failed and we were unable to recover it. 
00:22:23.036 [2024-05-15 01:09:35.351690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.351938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.351981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.036 qpair failed and we were unable to recover it. 00:22:23.036 [2024-05-15 01:09:35.352189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.352368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.352395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.036 qpair failed and we were unable to recover it. 00:22:23.036 [2024-05-15 01:09:35.352581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.352814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.352842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.036 qpair failed and we were unable to recover it. 00:22:23.036 [2024-05-15 01:09:35.353057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.353374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.353436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.036 qpair failed and we were unable to recover it. 00:22:23.036 [2024-05-15 01:09:35.353629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.353830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.353859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.036 qpair failed and we were unable to recover it. 00:22:23.036 [2024-05-15 01:09:35.354072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.354313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.354341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.036 qpair failed and we were unable to recover it. 00:22:23.036 [2024-05-15 01:09:35.354550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.354750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.354778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.036 qpair failed and we were unable to recover it. 
00:22:23.036 [2024-05-15 01:09:35.354992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.355195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.355222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.036 qpair failed and we were unable to recover it. 00:22:23.036 [2024-05-15 01:09:35.355432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.355608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.355640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.036 qpair failed and we were unable to recover it. 00:22:23.036 [2024-05-15 01:09:35.355858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.356050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.036 [2024-05-15 01:09:35.356089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.036 qpair failed and we were unable to recover it. 00:22:23.036 [2024-05-15 01:09:35.356310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.356525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.356554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.037 qpair failed and we were unable to recover it. 00:22:23.037 [2024-05-15 01:09:35.356768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.356985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.357011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.037 qpair failed and we were unable to recover it. 00:22:23.037 [2024-05-15 01:09:35.357201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.357363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.357388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.037 qpair failed and we were unable to recover it. 00:22:23.037 [2024-05-15 01:09:35.357552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.357753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.357780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.037 qpair failed and we were unable to recover it. 
00:22:23.037 [2024-05-15 01:09:35.357981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.358192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.358221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.037 qpair failed and we were unable to recover it. 00:22:23.037 [2024-05-15 01:09:35.358440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.358628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.358653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.037 qpair failed and we were unable to recover it. 00:22:23.037 [2024-05-15 01:09:35.358835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.359085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.359114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.037 qpair failed and we were unable to recover it. 00:22:23.037 [2024-05-15 01:09:35.359320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.359559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.359587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.037 qpair failed and we were unable to recover it. 00:22:23.037 [2024-05-15 01:09:35.359812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.360055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.360087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.037 qpair failed and we were unable to recover it. 00:22:23.037 [2024-05-15 01:09:35.360297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.360485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.360510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.037 qpair failed and we were unable to recover it. 00:22:23.037 [2024-05-15 01:09:35.360720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.360920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.360964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.037 qpair failed and we were unable to recover it. 
00:22:23.037 [2024-05-15 01:09:35.361138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.361356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.361385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.037 qpair failed and we were unable to recover it. 00:22:23.037 [2024-05-15 01:09:35.361566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.361778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.361803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.037 qpair failed and we were unable to recover it. 00:22:23.037 [2024-05-15 01:09:35.362064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.363043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.363078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.037 qpair failed and we were unable to recover it. 00:22:23.037 [2024-05-15 01:09:35.363278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.363480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.363509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.037 qpair failed and we were unable to recover it. 00:22:23.037 [2024-05-15 01:09:35.363688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.363985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.364014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.037 qpair failed and we were unable to recover it. 00:22:23.037 [2024-05-15 01:09:35.364248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.364445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.364471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.037 qpair failed and we were unable to recover it. 00:22:23.037 [2024-05-15 01:09:35.364664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.364910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.364949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.037 qpair failed and we were unable to recover it. 
00:22:23.037 [2024-05-15 01:09:35.365182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.365402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.365435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.037 qpair failed and we were unable to recover it. 00:22:23.037 [2024-05-15 01:09:35.365688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.365942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.365985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.037 qpair failed and we were unable to recover it. 00:22:23.037 [2024-05-15 01:09:35.366178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.366366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.366390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.037 qpair failed and we were unable to recover it. 00:22:23.037 [2024-05-15 01:09:35.366624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.366805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.366833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.037 qpair failed and we were unable to recover it. 00:22:23.037 [2024-05-15 01:09:35.367051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.367246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.367274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.037 qpair failed and we were unable to recover it. 00:22:23.037 [2024-05-15 01:09:35.367490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.367687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.367715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.037 qpair failed and we were unable to recover it. 00:22:23.037 [2024-05-15 01:09:35.367901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.368175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.368226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.037 qpair failed and we were unable to recover it. 
00:22:23.037 [2024-05-15 01:09:35.369194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.369529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.369582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.037 qpair failed and we were unable to recover it. 00:22:23.037 [2024-05-15 01:09:35.369800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.370022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.370048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.037 qpair failed and we were unable to recover it. 00:22:23.037 [2024-05-15 01:09:35.370266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.370501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.370529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.037 qpair failed and we were unable to recover it. 00:22:23.037 [2024-05-15 01:09:35.370874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.371141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.371172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.037 qpair failed and we were unable to recover it. 00:22:23.037 [2024-05-15 01:09:35.371344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.371550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.037 [2024-05-15 01:09:35.371577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.037 qpair failed and we were unable to recover it. 00:22:23.038 [2024-05-15 01:09:35.371788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.371983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.372010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.038 qpair failed and we were unable to recover it. 00:22:23.038 [2024-05-15 01:09:35.372176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.372360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.372385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.038 qpair failed and we were unable to recover it. 
00:22:23.038 [2024-05-15 01:09:35.372566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.372739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.372766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.038 qpair failed and we were unable to recover it. 00:22:23.038 [2024-05-15 01:09:35.372953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.373128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.373155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.038 qpair failed and we were unable to recover it. 00:22:23.038 [2024-05-15 01:09:35.373419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.373592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.373621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.038 qpair failed and we were unable to recover it. 00:22:23.038 [2024-05-15 01:09:35.373831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.374048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.374074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.038 qpair failed and we were unable to recover it. 00:22:23.038 [2024-05-15 01:09:35.374353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.374582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.374610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.038 qpair failed and we were unable to recover it. 00:22:23.038 [2024-05-15 01:09:35.374812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.375054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.375080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.038 qpair failed and we were unable to recover it. 00:22:23.038 [2024-05-15 01:09:35.375268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.375452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.375480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.038 qpair failed and we were unable to recover it. 
00:22:23.038 [2024-05-15 01:09:35.375723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.375972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.376005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.038 qpair failed and we were unable to recover it. 00:22:23.038 [2024-05-15 01:09:35.376197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.376436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.376464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.038 qpair failed and we were unable to recover it. 00:22:23.038 [2024-05-15 01:09:35.376717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.376916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.376952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.038 qpair failed and we were unable to recover it. 00:22:23.038 [2024-05-15 01:09:35.377139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.377310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.377335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.038 qpair failed and we were unable to recover it. 00:22:23.038 [2024-05-15 01:09:35.377523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.377747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.377775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.038 qpair failed and we were unable to recover it. 00:22:23.038 [2024-05-15 01:09:35.378004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.378168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.378192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.038 qpair failed and we were unable to recover it. 00:22:23.038 [2024-05-15 01:09:35.378441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.378648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.378675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.038 qpair failed and we were unable to recover it. 
00:22:23.038 [2024-05-15 01:09:35.378874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.379093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.379118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.038 qpair failed and we were unable to recover it. 00:22:23.038 [2024-05-15 01:09:35.379344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.379523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.379552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.038 qpair failed and we were unable to recover it. 00:22:23.038 [2024-05-15 01:09:35.379737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.379947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.379990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.038 qpair failed and we were unable to recover it. 00:22:23.038 [2024-05-15 01:09:35.380189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.380379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.380407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.038 qpair failed and we were unable to recover it. 00:22:23.038 [2024-05-15 01:09:35.380590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.380762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.380789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.038 qpair failed and we were unable to recover it. 00:22:23.038 [2024-05-15 01:09:35.380988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.381179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.381204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.038 qpair failed and we were unable to recover it. 00:22:23.038 [2024-05-15 01:09:35.381427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.381628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.381654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.038 qpair failed and we were unable to recover it. 
00:22:23.038 [2024-05-15 01:09:35.381887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.382105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.382130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.038 qpair failed and we were unable to recover it. 00:22:23.038 [2024-05-15 01:09:35.382320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.382538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.382566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.038 qpair failed and we were unable to recover it. 00:22:23.038 [2024-05-15 01:09:35.382744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.382963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.383005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.038 qpair failed and we were unable to recover it. 00:22:23.038 [2024-05-15 01:09:35.383168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.383395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.383424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.038 qpair failed and we were unable to recover it. 00:22:23.038 [2024-05-15 01:09:35.383633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.383807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.383835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.038 qpair failed and we were unable to recover it. 00:22:23.038 [2024-05-15 01:09:35.384022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.384184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.384209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.038 qpair failed and we were unable to recover it. 00:22:23.038 [2024-05-15 01:09:35.384368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.384579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.038 [2024-05-15 01:09:35.384606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.039 qpair failed and we were unable to recover it. 
00:22:23.039 [2024-05-15 01:09:35.384808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.385035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.385060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.039 qpair failed and we were unable to recover it. 00:22:23.039 [2024-05-15 01:09:35.385253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.385408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.385433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.039 qpair failed and we were unable to recover it. 00:22:23.039 [2024-05-15 01:09:35.385622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.385811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.385839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.039 qpair failed and we were unable to recover it. 00:22:23.039 [2024-05-15 01:09:35.386063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.386276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.386303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.039 qpair failed and we were unable to recover it. 00:22:23.039 [2024-05-15 01:09:35.386498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.386750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.386776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.039 qpair failed and we were unable to recover it. 00:22:23.039 [2024-05-15 01:09:35.386986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.387205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.387229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.039 qpair failed and we were unable to recover it. 00:22:23.039 [2024-05-15 01:09:35.387421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.387633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.387673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.039 qpair failed and we were unable to recover it. 
00:22:23.039 [2024-05-15 01:09:35.387880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.388051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.388077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.039 qpair failed and we were unable to recover it. 00:22:23.039 [2024-05-15 01:09:35.388295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.388539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.388565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.039 qpair failed and we were unable to recover it. 00:22:23.039 [2024-05-15 01:09:35.388802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.388998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.389025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.039 qpair failed and we were unable to recover it. 00:22:23.039 [2024-05-15 01:09:35.389251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.389448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.389474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.039 qpair failed and we were unable to recover it. 00:22:23.039 [2024-05-15 01:09:35.389670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.389861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.389886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.039 qpair failed and we were unable to recover it. 00:22:23.039 [2024-05-15 01:09:35.390110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.390328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.390353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.039 qpair failed and we were unable to recover it. 00:22:23.039 [2024-05-15 01:09:35.390574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.390774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.390799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.039 qpair failed and we were unable to recover it. 
00:22:23.039 [2024-05-15 01:09:35.391049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.391264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.391290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.039 qpair failed and we were unable to recover it. 00:22:23.039 [2024-05-15 01:09:35.391484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.391658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.391683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.039 qpair failed and we were unable to recover it. 00:22:23.039 [2024-05-15 01:09:35.391876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.392124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.392149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.039 qpair failed and we were unable to recover it. 00:22:23.039 [2024-05-15 01:09:35.392310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.392503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.392544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.039 qpair failed and we were unable to recover it. 00:22:23.039 [2024-05-15 01:09:35.392722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.392920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.039 [2024-05-15 01:09:35.392955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.039 qpair failed and we were unable to recover it. 00:22:23.039 [2024-05-15 01:09:35.393184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.311 [2024-05-15 01:09:35.393385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.311 [2024-05-15 01:09:35.393410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.311 qpair failed and we were unable to recover it. 00:22:23.311 [2024-05-15 01:09:35.393577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.311 [2024-05-15 01:09:35.393792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.311 [2024-05-15 01:09:35.393817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.311 qpair failed and we were unable to recover it. 
00:22:23.311 [2024-05-15 01:09:35.394030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.311 [2024-05-15 01:09:35.394195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.311 [2024-05-15 01:09:35.394220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.311 qpair failed and we were unable to recover it. 00:22:23.311 [2024-05-15 01:09:35.394426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.311 [2024-05-15 01:09:35.394588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.311 [2024-05-15 01:09:35.394615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.311 qpair failed and we were unable to recover it. 00:22:23.311 [2024-05-15 01:09:35.394806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.311 [2024-05-15 01:09:35.394992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.311 [2024-05-15 01:09:35.395018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.311 qpair failed and we were unable to recover it. 00:22:23.311 [2024-05-15 01:09:35.395234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.311 [2024-05-15 01:09:35.395399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.311 [2024-05-15 01:09:35.395423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.311 qpair failed and we were unable to recover it. 00:22:23.311 [2024-05-15 01:09:35.395613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.311 [2024-05-15 01:09:35.395777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.311 [2024-05-15 01:09:35.395802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.311 qpair failed and we were unable to recover it. 00:22:23.312 [2024-05-15 01:09:35.395993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.396175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.396200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.312 qpair failed and we were unable to recover it. 00:22:23.312 [2024-05-15 01:09:35.396358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.396570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.396595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.312 qpair failed and we were unable to recover it. 
00:22:23.312 [2024-05-15 01:09:35.396780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.396967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.396994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.312 qpair failed and we were unable to recover it. 00:22:23.312 [2024-05-15 01:09:35.397169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.397384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.397407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.312 qpair failed and we were unable to recover it. 00:22:23.312 [2024-05-15 01:09:35.397643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.397831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.397861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.312 qpair failed and we were unable to recover it. 00:22:23.312 [2024-05-15 01:09:35.398063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.398257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.398282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.312 qpair failed and we were unable to recover it. 00:22:23.312 [2024-05-15 01:09:35.398455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.398640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.398666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.312 qpair failed and we were unable to recover it. 00:22:23.312 [2024-05-15 01:09:35.398845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.399030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.399056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.312 qpair failed and we were unable to recover it. 00:22:23.312 [2024-05-15 01:09:35.399248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.399407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.399433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.312 qpair failed and we were unable to recover it. 
00:22:23.312 [2024-05-15 01:09:35.399596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.399784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.399810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.312 qpair failed and we were unable to recover it. 00:22:23.312 [2024-05-15 01:09:35.399980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.400186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.400211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.312 qpair failed and we were unable to recover it. 00:22:23.312 [2024-05-15 01:09:35.400375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.400557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.400583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.312 qpair failed and we were unable to recover it. 00:22:23.312 [2024-05-15 01:09:35.400745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.400941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.400966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.312 qpair failed and we were unable to recover it. 00:22:23.312 [2024-05-15 01:09:35.401141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.401315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.401339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.312 qpair failed and we were unable to recover it. 00:22:23.312 [2024-05-15 01:09:35.401499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.401659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.401684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.312 qpair failed and we were unable to recover it. 00:22:23.312 [2024-05-15 01:09:35.401847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.402022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.402049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.312 qpair failed and we were unable to recover it. 
00:22:23.312 [2024-05-15 01:09:35.402219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.402423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.402447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.312 qpair failed and we were unable to recover it. 00:22:23.312 [2024-05-15 01:09:35.402643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.402828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.402853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.312 qpair failed and we were unable to recover it. 00:22:23.312 [2024-05-15 01:09:35.403050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.403247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.403272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.312 qpair failed and we were unable to recover it. 00:22:23.312 [2024-05-15 01:09:35.403432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.403603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.403628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.312 qpair failed and we were unable to recover it. 00:22:23.312 [2024-05-15 01:09:35.403786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.404001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.404027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.312 qpair failed and we were unable to recover it. 00:22:23.312 [2024-05-15 01:09:35.404186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.404381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.404406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.312 qpair failed and we were unable to recover it. 00:22:23.312 [2024-05-15 01:09:35.404620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.404838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.404862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.312 qpair failed and we were unable to recover it. 
00:22:23.312 [2024-05-15 01:09:35.405046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.405208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.405232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.312 qpair failed and we were unable to recover it. 00:22:23.312 [2024-05-15 01:09:35.405424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.405614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.405641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.312 qpair failed and we were unable to recover it. 00:22:23.312 [2024-05-15 01:09:35.405856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.406040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.406066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.312 qpair failed and we were unable to recover it. 00:22:23.312 [2024-05-15 01:09:35.406256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.406449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.406474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.312 qpair failed and we were unable to recover it. 00:22:23.312 [2024-05-15 01:09:35.406676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.406865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.406890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.312 qpair failed and we were unable to recover it. 00:22:23.312 [2024-05-15 01:09:35.407104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.407328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.407353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.312 qpair failed and we were unable to recover it. 00:22:23.312 [2024-05-15 01:09:35.407552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.312 [2024-05-15 01:09:35.407769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.407793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.313 qpair failed and we were unable to recover it. 
00:22:23.313 [2024-05-15 01:09:35.407957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.408109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.408135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.313 qpair failed and we were unable to recover it. 00:22:23.313 [2024-05-15 01:09:35.408346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.408540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.408565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.313 qpair failed and we were unable to recover it. 00:22:23.313 [2024-05-15 01:09:35.408756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.408920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.408949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.313 qpair failed and we were unable to recover it. 00:22:23.313 [2024-05-15 01:09:35.409140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.409299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.409324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.313 qpair failed and we were unable to recover it. 00:22:23.313 [2024-05-15 01:09:35.409493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.409656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.409681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.313 qpair failed and we were unable to recover it. 00:22:23.313 [2024-05-15 01:09:35.409847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.410027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.410053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.313 qpair failed and we were unable to recover it. 00:22:23.313 [2024-05-15 01:09:35.410254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.410440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.410465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.313 qpair failed and we were unable to recover it. 
00:22:23.313 [2024-05-15 01:09:35.410643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.410814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.410838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.313 qpair failed and we were unable to recover it. 00:22:23.313 [2024-05-15 01:09:35.411036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.411203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.411228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.313 qpair failed and we were unable to recover it. 00:22:23.313 [2024-05-15 01:09:35.411434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.411600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.411625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.313 qpair failed and we were unable to recover it. 00:22:23.313 [2024-05-15 01:09:35.411852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.412092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.412118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.313 qpair failed and we were unable to recover it. 00:22:23.313 [2024-05-15 01:09:35.412309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.412496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.412521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.313 qpair failed and we were unable to recover it. 00:22:23.313 [2024-05-15 01:09:35.412742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.412945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.412971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.313 qpair failed and we were unable to recover it. 00:22:23.313 [2024-05-15 01:09:35.413169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.413379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.413407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.313 qpair failed and we were unable to recover it. 
00:22:23.313 [2024-05-15 01:09:35.413613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.413831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.413857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.313 qpair failed and we were unable to recover it. 00:22:23.313 [2024-05-15 01:09:35.414042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.414212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.414254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.313 qpair failed and we were unable to recover it. 00:22:23.313 [2024-05-15 01:09:35.414443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.414656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.414683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.313 qpair failed and we were unable to recover it. 00:22:23.313 [2024-05-15 01:09:35.414892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.415091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.415116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.313 qpair failed and we were unable to recover it. 00:22:23.313 [2024-05-15 01:09:35.415279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.415427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.415452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.313 qpair failed and we were unable to recover it. 00:22:23.313 [2024-05-15 01:09:35.415649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.415833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.415858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.313 qpair failed and we were unable to recover it. 00:22:23.313 [2024-05-15 01:09:35.416048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.416210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.416263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.313 qpair failed and we were unable to recover it. 
00:22:23.313 [2024-05-15 01:09:35.416469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.416683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.416711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.313 qpair failed and we were unable to recover it. 00:22:23.313 [2024-05-15 01:09:35.416886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.417080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.417106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.313 qpair failed and we were unable to recover it. 00:22:23.313 [2024-05-15 01:09:35.417274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.417493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.417522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.313 qpair failed and we were unable to recover it. 00:22:23.313 [2024-05-15 01:09:35.417735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.417984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.418010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.313 qpair failed and we were unable to recover it. 00:22:23.313 [2024-05-15 01:09:35.418193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.418364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.418409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.313 qpair failed and we were unable to recover it. 00:22:23.313 [2024-05-15 01:09:35.418627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.418842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.418871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.313 qpair failed and we were unable to recover it. 00:22:23.313 [2024-05-15 01:09:35.419094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.419262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.419286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.313 qpair failed and we were unable to recover it. 
00:22:23.313 [2024-05-15 01:09:35.419445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.419607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.313 [2024-05-15 01:09:35.419632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.313 qpair failed and we were unable to recover it. 00:22:23.313 [2024-05-15 01:09:35.419893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.420084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.420110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.314 qpair failed and we were unable to recover it. 00:22:23.314 [2024-05-15 01:09:35.420278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.420504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.420531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.314 qpair failed and we were unable to recover it. 00:22:23.314 [2024-05-15 01:09:35.420736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.420926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.420961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.314 qpair failed and we were unable to recover it. 00:22:23.314 [2024-05-15 01:09:35.421146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.421339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.421367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.314 qpair failed and we were unable to recover it. 00:22:23.314 [2024-05-15 01:09:35.421593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.421813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.421841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.314 qpair failed and we were unable to recover it. 00:22:23.314 [2024-05-15 01:09:35.422113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.422311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.422335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.314 qpair failed and we were unable to recover it. 
00:22:23.314 [2024-05-15 01:09:35.422506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.422662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.422705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.314 qpair failed and we were unable to recover it. 00:22:23.314 [2024-05-15 01:09:35.422962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.423122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.423148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.314 qpair failed and we were unable to recover it. 00:22:23.314 [2024-05-15 01:09:35.423347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.423512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.423539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.314 qpair failed and we were unable to recover it. 00:22:23.314 [2024-05-15 01:09:35.423710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.423936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.423962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.314 qpair failed and we were unable to recover it. 00:22:23.314 [2024-05-15 01:09:35.424134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.424350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.424378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.314 qpair failed and we were unable to recover it. 00:22:23.314 [2024-05-15 01:09:35.424597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.424811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.424835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.314 qpair failed and we were unable to recover it. 00:22:23.314 [2024-05-15 01:09:35.425051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.425277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.425304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.314 qpair failed and we were unable to recover it. 
00:22:23.314 [2024-05-15 01:09:35.425540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.425731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.425758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.314 qpair failed and we were unable to recover it. 00:22:23.314 [2024-05-15 01:09:35.425988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.426178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.426206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.314 qpair failed and we were unable to recover it. 00:22:23.314 [2024-05-15 01:09:35.426367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.426519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.426544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.314 qpair failed and we were unable to recover it. 00:22:23.314 [2024-05-15 01:09:35.426708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.426904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.426936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.314 qpair failed and we were unable to recover it. 00:22:23.314 [2024-05-15 01:09:35.427119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.427283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.427308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.314 qpair failed and we were unable to recover it. 00:22:23.314 [2024-05-15 01:09:35.427522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.427749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.427777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.314 qpair failed and we were unable to recover it. 00:22:23.314 [2024-05-15 01:09:35.427985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.428140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.428165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.314 qpair failed and we were unable to recover it. 
00:22:23.314 [2024-05-15 01:09:35.428330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.428499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.428524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.314 qpair failed and we were unable to recover it. 00:22:23.314 [2024-05-15 01:09:35.428696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.428919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.428957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.314 qpair failed and we were unable to recover it. 00:22:23.314 [2024-05-15 01:09:35.429125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.429285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.429310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.314 qpair failed and we were unable to recover it. 00:22:23.314 [2024-05-15 01:09:35.429485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.429686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.429710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.314 qpair failed and we were unable to recover it. 00:22:23.314 [2024-05-15 01:09:35.429872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.430073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.430102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.314 qpair failed and we were unable to recover it. 00:22:23.314 [2024-05-15 01:09:35.430294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.430460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.430485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.314 qpair failed and we were unable to recover it. 00:22:23.314 [2024-05-15 01:09:35.430645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.430833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.430858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.314 qpair failed and we were unable to recover it. 
00:22:23.314 [2024-05-15 01:09:35.431045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.431212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.431238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.314 qpair failed and we were unable to recover it. 00:22:23.314 [2024-05-15 01:09:35.431402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.431689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.431717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.314 qpair failed and we were unable to recover it. 00:22:23.314 [2024-05-15 01:09:35.431973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.432138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.314 [2024-05-15 01:09:35.432164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.315 qpair failed and we were unable to recover it. 00:22:23.315 [2024-05-15 01:09:35.432337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.432556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.432585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.315 qpair failed and we were unable to recover it. 00:22:23.315 [2024-05-15 01:09:35.432830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.433022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.433048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.315 qpair failed and we were unable to recover it. 00:22:23.315 [2024-05-15 01:09:35.433215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.433440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.433467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.315 qpair failed and we were unable to recover it. 00:22:23.315 [2024-05-15 01:09:35.433664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.433846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.433873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.315 qpair failed and we were unable to recover it. 
00:22:23.315 [2024-05-15 01:09:35.434088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.434273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.434305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.315 qpair failed and we were unable to recover it. 00:22:23.315 [2024-05-15 01:09:35.434523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.434765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.434793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.315 qpair failed and we were unable to recover it. 00:22:23.315 [2024-05-15 01:09:35.435024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.435193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.435231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.315 qpair failed and we were unable to recover it. 00:22:23.315 [2024-05-15 01:09:35.435510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.435787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.435815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.315 qpair failed and we were unable to recover it. 00:22:23.315 [2024-05-15 01:09:35.436036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.436223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.436251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.315 qpair failed and we were unable to recover it. 00:22:23.315 [2024-05-15 01:09:35.436462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.436646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.436674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.315 qpair failed and we were unable to recover it. 00:22:23.315 [2024-05-15 01:09:35.436880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.437079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.437105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.315 qpair failed and we were unable to recover it. 
00:22:23.315 [2024-05-15 01:09:35.438222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.438510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.438558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.315 qpair failed and we were unable to recover it. 00:22:23.315 [2024-05-15 01:09:35.438777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.438983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.439010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.315 qpair failed and we were unable to recover it. 00:22:23.315 [2024-05-15 01:09:35.439173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.439411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.439441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.315 qpair failed and we were unable to recover it. 00:22:23.315 [2024-05-15 01:09:35.439654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.439838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.439872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.315 qpair failed and we were unable to recover it. 00:22:23.315 [2024-05-15 01:09:35.440083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.440245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.440269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.315 qpair failed and we were unable to recover it. 00:22:23.315 [2024-05-15 01:09:35.440459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.440670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.440698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.315 qpair failed and we were unable to recover it. 00:22:23.315 [2024-05-15 01:09:35.440888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.441094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.441119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.315 qpair failed and we were unable to recover it. 
00:22:23.315 [2024-05-15 01:09:35.441299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.441536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.441564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.315 qpair failed and we were unable to recover it. 00:22:23.315 [2024-05-15 01:09:35.441800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.442037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.442063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.315 qpair failed and we were unable to recover it. 00:22:23.315 [2024-05-15 01:09:35.442227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.442460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.442487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.315 qpair failed and we were unable to recover it. 00:22:23.315 [2024-05-15 01:09:35.442702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.442906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.442946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.315 qpair failed and we were unable to recover it. 00:22:23.315 [2024-05-15 01:09:35.443137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.443352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.443379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.315 qpair failed and we were unable to recover it. 00:22:23.315 [2024-05-15 01:09:35.443588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.443824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.443850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.315 qpair failed and we were unable to recover it. 00:22:23.315 [2024-05-15 01:09:35.444065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.444230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.444255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.315 qpair failed and we were unable to recover it. 
00:22:23.315 [2024-05-15 01:09:35.444453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.444651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.444675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.315 qpair failed and we were unable to recover it. 00:22:23.315 [2024-05-15 01:09:35.444841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.445038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.445064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.315 qpair failed and we were unable to recover it. 00:22:23.315 [2024-05-15 01:09:35.445226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.445443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.445467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.315 qpair failed and we were unable to recover it. 00:22:23.315 [2024-05-15 01:09:35.445637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.445829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.315 [2024-05-15 01:09:35.445855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.315 qpair failed and we were unable to recover it. 00:22:23.316 [2024-05-15 01:09:35.446055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.446241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.446268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.316 qpair failed and we were unable to recover it. 00:22:23.316 [2024-05-15 01:09:35.446463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.447261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.447294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.316 qpair failed and we were unable to recover it. 00:22:23.316 [2024-05-15 01:09:35.447513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.447699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.447727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.316 qpair failed and we were unable to recover it. 
00:22:23.316 [2024-05-15 01:09:35.447953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.448107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.448132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.316 qpair failed and we were unable to recover it. 00:22:23.316 [2024-05-15 01:09:35.448322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.448526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.448553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.316 qpair failed and we were unable to recover it. 00:22:23.316 [2024-05-15 01:09:35.448730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.448917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.448961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.316 qpair failed and we were unable to recover it. 00:22:23.316 [2024-05-15 01:09:35.449135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.449337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.449362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.316 qpair failed and we were unable to recover it. 00:22:23.316 [2024-05-15 01:09:35.449593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.449784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.449813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.316 qpair failed and we were unable to recover it. 00:22:23.316 [2024-05-15 01:09:35.450008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.450179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.450219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.316 qpair failed and we were unable to recover it. 00:22:23.316 [2024-05-15 01:09:35.450408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.450623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.450656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.316 qpair failed and we were unable to recover it. 
00:22:23.316 [2024-05-15 01:09:35.450832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.451059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.451086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.316 qpair failed and we were unable to recover it. 00:22:23.316 [2024-05-15 01:09:35.451276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.451475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.451502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.316 qpair failed and we were unable to recover it. 00:22:23.316 [2024-05-15 01:09:35.451709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.451942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.451986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.316 qpair failed and we were unable to recover it. 00:22:23.316 [2024-05-15 01:09:35.452734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.452991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.453018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.316 qpair failed and we were unable to recover it. 00:22:23.316 [2024-05-15 01:09:35.453192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.453420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.453449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.316 qpair failed and we were unable to recover it. 00:22:23.316 [2024-05-15 01:09:35.453666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.453911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.453953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.316 qpair failed and we were unable to recover it. 00:22:23.316 [2024-05-15 01:09:35.454148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.454324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.454351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.316 qpair failed and we were unable to recover it. 
00:22:23.316 [2024-05-15 01:09:35.454555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.454770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.454796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.316 qpair failed and we were unable to recover it. 00:22:23.316 [2024-05-15 01:09:35.455041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.455203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.455245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.316 qpair failed and we were unable to recover it. 00:22:23.316 [2024-05-15 01:09:35.455471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.455685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.455713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.316 qpair failed and we were unable to recover it. 00:22:23.316 [2024-05-15 01:09:35.455923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.456115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.456140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.316 qpair failed and we were unable to recover it. 00:22:23.316 [2024-05-15 01:09:35.456370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.456529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.456554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.316 qpair failed and we were unable to recover it. 00:22:23.316 [2024-05-15 01:09:35.456768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.456986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.316 [2024-05-15 01:09:35.457012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.316 qpair failed and we were unable to recover it. 00:22:23.316 [2024-05-15 01:09:35.457172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.457368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.457396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.317 qpair failed and we were unable to recover it. 
00:22:23.317 [2024-05-15 01:09:35.457623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.457864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.457889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.317 qpair failed and we were unable to recover it. 00:22:23.317 [2024-05-15 01:09:35.458048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.458219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.458243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.317 qpair failed and we were unable to recover it. 00:22:23.317 [2024-05-15 01:09:35.458454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.458660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.458687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.317 qpair failed and we were unable to recover it. 00:22:23.317 [2024-05-15 01:09:35.458892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.459093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.459121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.317 qpair failed and we were unable to recover it. 00:22:23.317 [2024-05-15 01:09:35.459324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.459534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.459562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.317 qpair failed and we were unable to recover it. 00:22:23.317 [2024-05-15 01:09:35.459744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.459956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.459982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.317 qpair failed and we were unable to recover it. 00:22:23.317 [2024-05-15 01:09:35.460143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.460318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.460345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.317 qpair failed and we were unable to recover it. 
00:22:23.317 [2024-05-15 01:09:35.460614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.460835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.460863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.317 qpair failed and we were unable to recover it. 00:22:23.317 [2024-05-15 01:09:35.461055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.461252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.461279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.317 qpair failed and we were unable to recover it. 00:22:23.317 [2024-05-15 01:09:35.461468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.461658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.461683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.317 qpair failed and we were unable to recover it. 00:22:23.317 [2024-05-15 01:09:35.461847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.462041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.462067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.317 qpair failed and we were unable to recover it. 00:22:23.317 [2024-05-15 01:09:35.462234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.462449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.462478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.317 qpair failed and we were unable to recover it. 00:22:23.317 [2024-05-15 01:09:35.462678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.462984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.463010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.317 qpair failed and we were unable to recover it. 00:22:23.317 [2024-05-15 01:09:35.463863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.464099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.464126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.317 qpair failed and we were unable to recover it. 
00:22:23.317 [2024-05-15 01:09:35.464359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.464567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.464595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.317 qpair failed and we were unable to recover it. 00:22:23.317 [2024-05-15 01:09:35.464804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.464999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.465025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.317 qpair failed and we were unable to recover it. 00:22:23.317 [2024-05-15 01:09:35.465761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.466009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.466036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.317 qpair failed and we were unable to recover it. 00:22:23.317 [2024-05-15 01:09:35.466201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.466465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.466492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.317 qpair failed and we were unable to recover it. 00:22:23.317 [2024-05-15 01:09:35.466664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.466909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.466944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.317 qpair failed and we were unable to recover it. 00:22:23.317 [2024-05-15 01:09:35.467156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.467391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.467415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.317 qpair failed and we were unable to recover it. 00:22:23.317 [2024-05-15 01:09:35.467609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.467796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.467823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.317 qpair failed and we were unable to recover it. 
00:22:23.317 [2024-05-15 01:09:35.468034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.468204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.468257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.317 qpair failed and we were unable to recover it. 00:22:23.317 [2024-05-15 01:09:35.468467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.468645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.468673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.317 qpair failed and we were unable to recover it. 00:22:23.317 [2024-05-15 01:09:35.468882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.469101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.469127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.317 qpair failed and we were unable to recover it. 00:22:23.317 [2024-05-15 01:09:35.469286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.469516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.469543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.317 qpair failed and we were unable to recover it. 00:22:23.317 [2024-05-15 01:09:35.469833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.470072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.470098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.317 qpair failed and we were unable to recover it. 00:22:23.317 [2024-05-15 01:09:35.470272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.470434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.470459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.317 qpair failed and we were unable to recover it. 00:22:23.317 [2024-05-15 01:09:35.470668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.470859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.470886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.317 qpair failed and we were unable to recover it. 
00:22:23.317 [2024-05-15 01:09:35.471085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.471250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.317 [2024-05-15 01:09:35.471274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.317 qpair failed and we were unable to recover it. 00:22:23.318 [2024-05-15 01:09:35.471549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.471763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.471792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.318 qpair failed and we were unable to recover it. 00:22:23.318 [2024-05-15 01:09:35.472015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.472205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.472230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.318 qpair failed and we were unable to recover it. 00:22:23.318 [2024-05-15 01:09:35.472452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.472632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.472660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.318 qpair failed and we were unable to recover it. 00:22:23.318 [2024-05-15 01:09:35.472919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.473156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.473185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.318 qpair failed and we were unable to recover it. 00:22:23.318 [2024-05-15 01:09:35.473388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.473610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.473654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.318 qpair failed and we were unable to recover it. 00:22:23.318 [2024-05-15 01:09:35.473890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.474080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.474108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.318 qpair failed and we were unable to recover it. 
00:22:23.318 [2024-05-15 01:09:35.474315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.474585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.474627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.318 qpair failed and we were unable to recover it. 00:22:23.318 [2024-05-15 01:09:35.474856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.475039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.475065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.318 qpair failed and we were unable to recover it. 00:22:23.318 [2024-05-15 01:09:35.475251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.475530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.475556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.318 qpair failed and we were unable to recover it. 00:22:23.318 [2024-05-15 01:09:35.475771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.475961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.475989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.318 qpair failed and we were unable to recover it. 00:22:23.318 [2024-05-15 01:09:35.476156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.476342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.476368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.318 qpair failed and we were unable to recover it. 00:22:23.318 [2024-05-15 01:09:35.476665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.476888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.476920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.318 qpair failed and we were unable to recover it. 00:22:23.318 [2024-05-15 01:09:35.477121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.477316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.477343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.318 qpair failed and we were unable to recover it. 
00:22:23.318 [2024-05-15 01:09:35.477556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.477822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.477847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.318 qpair failed and we were unable to recover it. 00:22:23.318 [2024-05-15 01:09:35.478054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.478286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.478315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.318 qpair failed and we were unable to recover it. 00:22:23.318 [2024-05-15 01:09:35.478579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.478818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.478843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.318 qpair failed and we were unable to recover it. 00:22:23.318 [2024-05-15 01:09:35.479055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.479214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.479248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.318 qpair failed and we were unable to recover it. 00:22:23.318 [2024-05-15 01:09:35.479455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.479804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.479853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.318 qpair failed and we were unable to recover it. 00:22:23.318 [2024-05-15 01:09:35.480071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.480943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.480974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.318 qpair failed and we were unable to recover it. 00:22:23.318 [2024-05-15 01:09:35.481180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.481824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.481853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.318 qpair failed and we were unable to recover it. 
00:22:23.318 [2024-05-15 01:09:35.482064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.482250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.482279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.318 qpair failed and we were unable to recover it. 00:22:23.318 [2024-05-15 01:09:35.482490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.482735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.482760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.318 qpair failed and we were unable to recover it. 00:22:23.318 [2024-05-15 01:09:35.482956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.483124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.483150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.318 qpair failed and we were unable to recover it. 00:22:23.318 [2024-05-15 01:09:35.483378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.483630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.483674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.318 qpair failed and we were unable to recover it. 00:22:23.318 [2024-05-15 01:09:35.483866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.484065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.484092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.318 qpair failed and we were unable to recover it. 00:22:23.318 [2024-05-15 01:09:35.484325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.484604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.484646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.318 qpair failed and we were unable to recover it. 00:22:23.318 [2024-05-15 01:09:35.484810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.485003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.485029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.318 qpair failed and we were unable to recover it. 
00:22:23.318 [2024-05-15 01:09:35.485199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.485400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.485428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.318 qpair failed and we were unable to recover it. 00:22:23.318 [2024-05-15 01:09:35.485668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.485873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.485899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.318 qpair failed and we were unable to recover it. 00:22:23.318 [2024-05-15 01:09:35.486096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.318 [2024-05-15 01:09:35.486263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.486288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.319 qpair failed and we were unable to recover it. 00:22:23.319 [2024-05-15 01:09:35.486455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.486648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.486694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.319 qpair failed and we were unable to recover it. 00:22:23.319 [2024-05-15 01:09:35.486853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.487028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.487053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.319 qpair failed and we were unable to recover it. 00:22:23.319 [2024-05-15 01:09:35.487234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.487498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.487542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.319 qpair failed and we were unable to recover it. 00:22:23.319 [2024-05-15 01:09:35.487732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.487903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.487935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.319 qpair failed and we were unable to recover it. 
00:22:23.319 [2024-05-15 01:09:35.488133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.488371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.488414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.319 qpair failed and we were unable to recover it. 00:22:23.319 [2024-05-15 01:09:35.488625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.488810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.488834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.319 qpair failed and we were unable to recover it. 00:22:23.319 [2024-05-15 01:09:35.489000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.489193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.489220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.319 qpair failed and we were unable to recover it. 00:22:23.319 [2024-05-15 01:09:35.489430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.489626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.489670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.319 qpair failed and we were unable to recover it. 00:22:23.319 [2024-05-15 01:09:35.489863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.490049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.490075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.319 qpair failed and we were unable to recover it. 00:22:23.319 [2024-05-15 01:09:35.490264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.490505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.490548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.319 qpair failed and we were unable to recover it. 00:22:23.319 [2024-05-15 01:09:35.490779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.490970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.490997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.319 qpair failed and we were unable to recover it. 
00:22:23.319 [2024-05-15 01:09:35.491161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.491381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.491410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.319 qpair failed and we were unable to recover it. 00:22:23.319 [2024-05-15 01:09:35.491597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.491758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.491785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.319 qpair failed and we were unable to recover it. 00:22:23.319 [2024-05-15 01:09:35.492006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.492193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.492219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.319 qpair failed and we were unable to recover it. 00:22:23.319 [2024-05-15 01:09:35.492402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.492635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.492680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.319 qpair failed and we were unable to recover it. 00:22:23.319 [2024-05-15 01:09:35.492907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.493108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.493133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.319 qpair failed and we were unable to recover it. 00:22:23.319 [2024-05-15 01:09:35.493328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.493541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.493583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.319 qpair failed and we were unable to recover it. 00:22:23.319 [2024-05-15 01:09:35.493775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.493942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.493968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.319 qpair failed and we were unable to recover it. 
00:22:23.319 [2024-05-15 01:09:35.494139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.494322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.494368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.319 qpair failed and we were unable to recover it. 00:22:23.319 [2024-05-15 01:09:35.494552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.494812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.494836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.319 qpair failed and we were unable to recover it. 00:22:23.319 [2024-05-15 01:09:35.495057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.495231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.495258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.319 qpair failed and we were unable to recover it. 00:22:23.319 [2024-05-15 01:09:35.495521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.495834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.495880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.319 qpair failed and we were unable to recover it. 00:22:23.319 [2024-05-15 01:09:35.496085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.496314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.496355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.319 qpair failed and we were unable to recover it. 00:22:23.319 [2024-05-15 01:09:35.496629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.496845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.496874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.319 qpair failed and we were unable to recover it. 00:22:23.319 [2024-05-15 01:09:35.497058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.497240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.497269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.319 qpair failed and we were unable to recover it. 
00:22:23.319 [2024-05-15 01:09:35.497494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.497767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.497810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.319 qpair failed and we were unable to recover it. 00:22:23.319 [2024-05-15 01:09:35.498007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.498170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.498197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.319 qpair failed and we were unable to recover it. 00:22:23.319 [2024-05-15 01:09:35.498365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.498528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.498553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.319 qpair failed and we were unable to recover it. 00:22:23.319 [2024-05-15 01:09:35.498768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.498964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.319 [2024-05-15 01:09:35.498992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.319 qpair failed and we were unable to recover it. 00:22:23.319 [2024-05-15 01:09:35.499161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.499363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.499407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.320 qpair failed and we were unable to recover it. 00:22:23.320 [2024-05-15 01:09:35.499630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.499830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.499856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.320 qpair failed and we were unable to recover it. 00:22:23.320 [2024-05-15 01:09:35.500035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.500248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.500292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.320 qpair failed and we were unable to recover it. 
00:22:23.320 [2024-05-15 01:09:35.500499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.500767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.500810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.320 qpair failed and we were unable to recover it. 00:22:23.320 [2024-05-15 01:09:35.501002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.501210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.501265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.320 qpair failed and we were unable to recover it. 00:22:23.320 [2024-05-15 01:09:35.501465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.501721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.501764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.320 qpair failed and we were unable to recover it. 00:22:23.320 [2024-05-15 01:09:35.501966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.502159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.502207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.320 qpair failed and we were unable to recover it. 00:22:23.320 [2024-05-15 01:09:35.502453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.502864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.502913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.320 qpair failed and we were unable to recover it. 00:22:23.320 [2024-05-15 01:09:35.503120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.503315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.503357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.320 qpair failed and we were unable to recover it. 00:22:23.320 [2024-05-15 01:09:35.503575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.503767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.503792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.320 qpair failed and we were unable to recover it. 
00:22:23.320 [2024-05-15 01:09:35.503953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.504170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.504213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.320 qpair failed and we were unable to recover it. 00:22:23.320 [2024-05-15 01:09:35.504460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.504650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.504675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.320 qpair failed and we were unable to recover it. 00:22:23.320 [2024-05-15 01:09:35.504839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.505057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.505101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.320 qpair failed and we were unable to recover it. 00:22:23.320 [2024-05-15 01:09:35.505355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.505571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.505617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.320 qpair failed and we were unable to recover it. 00:22:23.320 [2024-05-15 01:09:35.505807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.506030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.506080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.320 qpair failed and we were unable to recover it. 00:22:23.320 [2024-05-15 01:09:35.506242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.506501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.506544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.320 qpair failed and we were unable to recover it. 00:22:23.320 [2024-05-15 01:09:35.506719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.506899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.506941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.320 qpair failed and we were unable to recover it. 
00:22:23.320 [2024-05-15 01:09:35.507127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.507397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.507438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.320 qpair failed and we were unable to recover it. 00:22:23.320 [2024-05-15 01:09:35.507650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.507880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.507905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.320 qpair failed and we were unable to recover it. 00:22:23.320 [2024-05-15 01:09:35.508140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.508369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.508412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.320 qpair failed and we were unable to recover it. 00:22:23.320 [2024-05-15 01:09:35.508837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.509057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.509082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.320 qpair failed and we were unable to recover it. 00:22:23.320 [2024-05-15 01:09:35.509299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.509527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.509557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.320 qpair failed and we were unable to recover it. 00:22:23.320 [2024-05-15 01:09:35.509891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.510096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.510122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.320 qpair failed and we were unable to recover it. 00:22:23.320 [2024-05-15 01:09:35.510305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.510519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.510562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.320 qpair failed and we were unable to recover it. 
00:22:23.320 [2024-05-15 01:09:35.510743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.510940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.320 [2024-05-15 01:09:35.510972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.320 qpair failed and we were unable to recover it. 00:22:23.320 [2024-05-15 01:09:35.511173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.511430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.511472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.321 qpair failed and we were unable to recover it. 00:22:23.321 [2024-05-15 01:09:35.511726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.511950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.511976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.321 qpair failed and we were unable to recover it. 00:22:23.321 [2024-05-15 01:09:35.512146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.512418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.512469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.321 qpair failed and we were unable to recover it. 00:22:23.321 [2024-05-15 01:09:35.512744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.512939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.512965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.321 qpair failed and we were unable to recover it. 00:22:23.321 [2024-05-15 01:09:35.513133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.513384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.513426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.321 qpair failed and we were unable to recover it. 00:22:23.321 [2024-05-15 01:09:35.513827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.514068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.514094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.321 qpair failed and we were unable to recover it. 
00:22:23.321 [2024-05-15 01:09:35.514287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.515180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.515210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.321 qpair failed and we were unable to recover it. 00:22:23.321 [2024-05-15 01:09:35.515468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.515649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.515679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.321 qpair failed and we were unable to recover it. 00:22:23.321 [2024-05-15 01:09:35.515895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.516139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.516183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.321 qpair failed and we were unable to recover it. 00:22:23.321 [2024-05-15 01:09:35.516453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.516785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.516818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.321 qpair failed and we were unable to recover it. 00:22:23.321 [2024-05-15 01:09:35.517035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.517261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.517294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.321 qpair failed and we were unable to recover it. 00:22:23.321 [2024-05-15 01:09:35.517507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.517713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.517755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.321 qpair failed and we were unable to recover it. 00:22:23.321 [2024-05-15 01:09:35.517964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.518155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.518199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.321 qpair failed and we were unable to recover it. 
00:22:23.321 [2024-05-15 01:09:35.518426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.518636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.518678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.321 qpair failed and we were unable to recover it. 00:22:23.321 [2024-05-15 01:09:35.518882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.519107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.519152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.321 qpair failed and we were unable to recover it. 00:22:23.321 [2024-05-15 01:09:35.519387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.519583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.519627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.321 qpair failed and we were unable to recover it. 00:22:23.321 [2024-05-15 01:09:35.519791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.520024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.520068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.321 qpair failed and we were unable to recover it. 00:22:23.321 [2024-05-15 01:09:35.520259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.520464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.520506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.321 qpair failed and we were unable to recover it. 00:22:23.321 [2024-05-15 01:09:35.520723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.520883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.520908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.321 qpair failed and we were unable to recover it. 00:22:23.321 [2024-05-15 01:09:35.521114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.521359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.521387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.321 qpair failed and we were unable to recover it. 
00:22:23.321 [2024-05-15 01:09:35.521627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.521861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.521886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.321 qpair failed and we were unable to recover it. 00:22:23.321 [2024-05-15 01:09:35.522074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.522261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.522303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.321 qpair failed and we were unable to recover it. 00:22:23.321 [2024-05-15 01:09:35.522488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.522778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.522821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.321 qpair failed and we were unable to recover it. 00:22:23.321 [2024-05-15 01:09:35.523012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.523271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.523315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.321 qpair failed and we were unable to recover it. 00:22:23.321 [2024-05-15 01:09:35.523539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.523734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.523759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.321 qpair failed and we were unable to recover it. 00:22:23.321 [2024-05-15 01:09:35.523922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.524154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.524198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.321 qpair failed and we were unable to recover it. 00:22:23.321 [2024-05-15 01:09:35.524395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.524598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.524642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.321 qpair failed and we were unable to recover it. 
00:22:23.321 [2024-05-15 01:09:35.524831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.525019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.525045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.321 qpair failed and we were unable to recover it. 00:22:23.321 [2024-05-15 01:09:35.525231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.525505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.525547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.321 qpair failed and we were unable to recover it. 00:22:23.321 [2024-05-15 01:09:35.525761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.525941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.321 [2024-05-15 01:09:35.525967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.321 qpair failed and we were unable to recover it. 00:22:23.322 [2024-05-15 01:09:35.526169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.526375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.526403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.322 qpair failed and we were unable to recover it. 00:22:23.322 [2024-05-15 01:09:35.526631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.526935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.526961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.322 qpair failed and we were unable to recover it. 00:22:23.322 [2024-05-15 01:09:35.527135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.527434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.527482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.322 qpair failed and we were unable to recover it. 00:22:23.322 [2024-05-15 01:09:35.527705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.527942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.527968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.322 qpair failed and we were unable to recover it. 
00:22:23.322 [2024-05-15 01:09:35.528159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.528363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.528406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.322 qpair failed and we were unable to recover it. 00:22:23.322 [2024-05-15 01:09:35.528625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.528829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.528854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.322 qpair failed and we were unable to recover it. 00:22:23.322 [2024-05-15 01:09:35.529048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.529269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.529312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.322 qpair failed and we were unable to recover it. 00:22:23.322 [2024-05-15 01:09:35.529565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.529764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.529788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.322 qpair failed and we were unable to recover it. 00:22:23.322 [2024-05-15 01:09:35.529999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.530195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.530238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.322 qpair failed and we were unable to recover it. 00:22:23.322 [2024-05-15 01:09:35.530429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.530660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.530688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.322 qpair failed and we were unable to recover it. 00:22:23.322 [2024-05-15 01:09:35.530898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.531122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.531170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.322 qpair failed and we were unable to recover it. 
00:22:23.322 [2024-05-15 01:09:35.531384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.531645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.531691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.322 qpair failed and we were unable to recover it. 00:22:23.322 [2024-05-15 01:09:35.531910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.532109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.532154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.322 qpair failed and we were unable to recover it. 00:22:23.322 [2024-05-15 01:09:35.532356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.532558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.532600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.322 qpair failed and we were unable to recover it. 00:22:23.322 [2024-05-15 01:09:35.532797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.533011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.533055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.322 qpair failed and we were unable to recover it. 00:22:23.322 [2024-05-15 01:09:35.533270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.533552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.533586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.322 qpair failed and we were unable to recover it. 00:22:23.322 [2024-05-15 01:09:35.533812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.534024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.534068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.322 qpair failed and we were unable to recover it. 00:22:23.322 [2024-05-15 01:09:35.534259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.534505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.534532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.322 qpair failed and we were unable to recover it. 
00:22:23.322 [2024-05-15 01:09:35.534719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.534876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.534901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.322 qpair failed and we were unable to recover it. 00:22:23.322 [2024-05-15 01:09:35.535096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.535324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.535367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.322 qpair failed and we were unable to recover it. 00:22:23.322 [2024-05-15 01:09:35.535583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.535817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.535842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.322 qpair failed and we were unable to recover it. 00:22:23.322 [2024-05-15 01:09:35.536068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.536282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.536325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.322 qpair failed and we were unable to recover it. 00:22:23.322 [2024-05-15 01:09:35.536541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.536770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.536795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.322 qpair failed and we were unable to recover it. 00:22:23.322 [2024-05-15 01:09:35.536982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.537193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.537235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.322 qpair failed and we were unable to recover it. 00:22:23.322 [2024-05-15 01:09:35.537446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.537686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.537713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.322 qpair failed and we were unable to recover it. 
00:22:23.322 [2024-05-15 01:09:35.537886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.538103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.538147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.322 qpair failed and we were unable to recover it. 00:22:23.322 [2024-05-15 01:09:35.538348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.538568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.538614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.322 qpair failed and we were unable to recover it. 00:22:23.322 [2024-05-15 01:09:35.538811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.539019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.539064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.322 qpair failed and we were unable to recover it. 00:22:23.322 [2024-05-15 01:09:35.539283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.539524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.539566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.322 qpair failed and we were unable to recover it. 00:22:23.322 [2024-05-15 01:09:35.539728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.322 [2024-05-15 01:09:35.539994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.540020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.323 qpair failed and we were unable to recover it. 00:22:23.323 [2024-05-15 01:09:35.540250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.540450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.540479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.323 qpair failed and we were unable to recover it. 00:22:23.323 [2024-05-15 01:09:35.540747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.540979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.541006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.323 qpair failed and we were unable to recover it. 
00:22:23.323 [2024-05-15 01:09:35.541201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.541426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.541470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.323 qpair failed and we were unable to recover it. 00:22:23.323 [2024-05-15 01:09:35.541687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.541852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.541876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.323 qpair failed and we were unable to recover it. 00:22:23.323 [2024-05-15 01:09:35.542083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.542263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.542306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.323 qpair failed and we were unable to recover it. 00:22:23.323 [2024-05-15 01:09:35.542540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.542780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.542807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.323 qpair failed and we were unable to recover it. 00:22:23.323 [2024-05-15 01:09:35.543058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.543297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.543325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.323 qpair failed and we were unable to recover it. 00:22:23.323 [2024-05-15 01:09:35.543557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.543735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.543760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.323 qpair failed and we were unable to recover it. 00:22:23.323 [2024-05-15 01:09:35.543957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.544152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.544199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.323 qpair failed and we were unable to recover it. 
00:22:23.323 [2024-05-15 01:09:35.544412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.544643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.544685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.323 qpair failed and we were unable to recover it. 00:22:23.323 [2024-05-15 01:09:35.544916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.545112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.545137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.323 qpair failed and we were unable to recover it. 00:22:23.323 [2024-05-15 01:09:35.545390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.545661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.545704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.323 qpair failed and we were unable to recover it. 00:22:23.323 [2024-05-15 01:09:35.545894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.546109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.546135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.323 qpair failed and we were unable to recover it. 00:22:23.323 [2024-05-15 01:09:35.546330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.546535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.546576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.323 qpair failed and we were unable to recover it. 00:22:23.323 [2024-05-15 01:09:35.546827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.547040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.547066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.323 qpair failed and we were unable to recover it. 00:22:23.323 [2024-05-15 01:09:35.547265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.547573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.547629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.323 qpair failed and we were unable to recover it. 
00:22:23.323 [2024-05-15 01:09:35.547872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.548106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.548150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.323 qpair failed and we were unable to recover it. 00:22:23.323 [2024-05-15 01:09:35.548381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.548626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.548668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.323 qpair failed and we were unable to recover it. 00:22:23.323 [2024-05-15 01:09:35.548886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.549060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.549086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.323 qpair failed and we were unable to recover it. 00:22:23.323 [2024-05-15 01:09:35.549304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.549569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.549611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.323 qpair failed and we were unable to recover it. 00:22:23.323 [2024-05-15 01:09:35.549828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.549992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.550019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.323 qpair failed and we were unable to recover it. 00:22:23.323 [2024-05-15 01:09:35.550206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.550476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.550504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.323 qpair failed and we were unable to recover it. 00:22:23.323 [2024-05-15 01:09:35.550731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.550961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.550987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.323 qpair failed and we were unable to recover it. 
00:22:23.323 [2024-05-15 01:09:35.551201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.551408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.551451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.323 qpair failed and we were unable to recover it. 00:22:23.323 [2024-05-15 01:09:35.551697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.551875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.551900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.323 qpair failed and we were unable to recover it. 00:22:23.323 [2024-05-15 01:09:35.552102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.552260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.552287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.323 qpair failed and we were unable to recover it. 00:22:23.323 [2024-05-15 01:09:35.552492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.552689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.552732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.323 qpair failed and we were unable to recover it. 00:22:23.323 [2024-05-15 01:09:35.552920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.553146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.553190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.323 qpair failed and we were unable to recover it. 00:22:23.323 [2024-05-15 01:09:35.553403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.553661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.553704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.323 qpair failed and we were unable to recover it. 00:22:23.323 [2024-05-15 01:09:35.553869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.323 [2024-05-15 01:09:35.554095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.554139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.324 qpair failed and we were unable to recover it. 
00:22:23.324 [2024-05-15 01:09:35.554355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.554589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.554631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.324 qpair failed and we were unable to recover it. 00:22:23.324 [2024-05-15 01:09:35.554808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.554988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.555017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.324 qpair failed and we were unable to recover it. 00:22:23.324 [2024-05-15 01:09:35.555217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.555563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.555611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.324 qpair failed and we were unable to recover it. 00:22:23.324 [2024-05-15 01:09:35.555829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.556052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.556101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.324 qpair failed and we were unable to recover it. 00:22:23.324 [2024-05-15 01:09:35.556291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.556550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.556592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.324 qpair failed and we were unable to recover it. 00:22:23.324 [2024-05-15 01:09:35.556773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.556949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.556975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.324 qpair failed and we were unable to recover it. 00:22:23.324 [2024-05-15 01:09:35.557168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.557401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.557443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.324 qpair failed and we were unable to recover it. 
00:22:23.324 [2024-05-15 01:09:35.557663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.557895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.557921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.324 qpair failed and we were unable to recover it. 00:22:23.324 [2024-05-15 01:09:35.558118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.558383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.558425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.324 qpair failed and we were unable to recover it. 00:22:23.324 [2024-05-15 01:09:35.558668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.558874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.558899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.324 qpair failed and we were unable to recover it. 00:22:23.324 [2024-05-15 01:09:35.559083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.559270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.559313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.324 qpair failed and we were unable to recover it. 00:22:23.324 [2024-05-15 01:09:35.559558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.559920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.559997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.324 qpair failed and we were unable to recover it. 00:22:23.324 [2024-05-15 01:09:35.560220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.560433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.560476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.324 qpair failed and we were unable to recover it. 00:22:23.324 [2024-05-15 01:09:35.560780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.561010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.561035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.324 qpair failed and we were unable to recover it. 
00:22:23.324 [2024-05-15 01:09:35.561244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.561619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.561681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.324 qpair failed and we were unable to recover it. 00:22:23.324 [2024-05-15 01:09:35.561875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.562078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.562104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.324 qpair failed and we were unable to recover it. 00:22:23.324 [2024-05-15 01:09:35.562327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.562554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.562597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.324 qpair failed and we were unable to recover it. 00:22:23.324 [2024-05-15 01:09:35.562796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.562955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.562982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.324 qpair failed and we were unable to recover it. 00:22:23.324 [2024-05-15 01:09:35.563227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.563532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.563581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.324 qpair failed and we were unable to recover it. 00:22:23.324 [2024-05-15 01:09:35.563775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.563991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.564016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.324 qpair failed and we were unable to recover it. 00:22:23.324 [2024-05-15 01:09:35.564233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.564465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.564492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.324 qpair failed and we were unable to recover it. 
00:22:23.324 [2024-05-15 01:09:35.564760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.564982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.565012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.324 qpair failed and we were unable to recover it. 00:22:23.324 [2024-05-15 01:09:35.565244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.565445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.565487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.324 qpair failed and we were unable to recover it. 00:22:23.324 [2024-05-15 01:09:35.565733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.565945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.565971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.324 qpair failed and we were unable to recover it. 00:22:23.324 [2024-05-15 01:09:35.566130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.566347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.566389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.324 qpair failed and we were unable to recover it. 00:22:23.324 [2024-05-15 01:09:35.566731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.566975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.567001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.324 qpair failed and we were unable to recover it. 00:22:23.324 [2024-05-15 01:09:35.567187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.567420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.567462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.324 qpair failed and we were unable to recover it. 00:22:23.324 [2024-05-15 01:09:35.567692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.567875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.567900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.324 qpair failed and we were unable to recover it. 
00:22:23.324 [2024-05-15 01:09:35.568100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.568313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.324 [2024-05-15 01:09:35.568356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.325 qpair failed and we were unable to recover it. 00:22:23.325 [2024-05-15 01:09:35.568719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.568960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.568986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.325 qpair failed and we were unable to recover it. 00:22:23.325 [2024-05-15 01:09:35.569178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.569453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.569512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.325 qpair failed and we were unable to recover it. 00:22:23.325 [2024-05-15 01:09:35.569732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.569936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.569962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.325 qpair failed and we were unable to recover it. 00:22:23.325 [2024-05-15 01:09:35.570126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.570382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.570424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.325 qpair failed and we were unable to recover it. 00:22:23.325 [2024-05-15 01:09:35.570670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.570884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.570909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.325 qpair failed and we were unable to recover it. 00:22:23.325 [2024-05-15 01:09:35.571086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.571302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.571344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.325 qpair failed and we were unable to recover it. 
00:22:23.325 [2024-05-15 01:09:35.571589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.571811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.571836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.325 qpair failed and we were unable to recover it. 00:22:23.325 [2024-05-15 01:09:35.572001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.572224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.572265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.325 qpair failed and we were unable to recover it. 00:22:23.325 [2024-05-15 01:09:35.572512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.572813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.572856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.325 qpair failed and we were unable to recover it. 00:22:23.325 [2024-05-15 01:09:35.573028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.573248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.573291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.325 qpair failed and we were unable to recover it. 00:22:23.325 [2024-05-15 01:09:35.573508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.573864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.573923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.325 qpair failed and we were unable to recover it. 00:22:23.325 [2024-05-15 01:09:35.574145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.574325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.574371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.325 qpair failed and we were unable to recover it. 00:22:23.325 [2024-05-15 01:09:35.574593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.574806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.574831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.325 qpair failed and we were unable to recover it. 
00:22:23.325 [2024-05-15 01:09:35.575003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.575243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.575287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.325 qpair failed and we were unable to recover it. 00:22:23.325 [2024-05-15 01:09:35.575492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.575791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.575836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.325 qpair failed and we were unable to recover it. 00:22:23.325 [2024-05-15 01:09:35.576052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.576264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.576306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.325 qpair failed and we were unable to recover it. 00:22:23.325 [2024-05-15 01:09:35.576525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.576751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.576792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.325 qpair failed and we were unable to recover it. 00:22:23.325 [2024-05-15 01:09:35.576996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.577232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.577274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.325 qpair failed and we were unable to recover it. 00:22:23.325 [2024-05-15 01:09:35.577525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.577779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.577821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.325 qpair failed and we were unable to recover it. 00:22:23.325 [2024-05-15 01:09:35.578041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.578247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.578288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.325 qpair failed and we were unable to recover it. 
00:22:23.325 [2024-05-15 01:09:35.578531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.578788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.578830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.325 qpair failed and we were unable to recover it. 00:22:23.325 [2024-05-15 01:09:35.579011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.579261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.579309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.325 qpair failed and we were unable to recover it. 00:22:23.325 [2024-05-15 01:09:35.579547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.579727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.579751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.325 qpair failed and we were unable to recover it. 00:22:23.325 [2024-05-15 01:09:35.579942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.580184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.580227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.325 qpair failed and we were unable to recover it. 00:22:23.325 [2024-05-15 01:09:35.580449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.580648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.325 [2024-05-15 01:09:35.580676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.325 qpair failed and we were unable to recover it. 00:22:23.326 [2024-05-15 01:09:35.580849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.326 [2024-05-15 01:09:35.581088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.326 [2024-05-15 01:09:35.581117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.326 qpair failed and we were unable to recover it. 00:22:23.326 [2024-05-15 01:09:35.581349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.326 [2024-05-15 01:09:35.581662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.326 [2024-05-15 01:09:35.581721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.326 qpair failed and we were unable to recover it. 
00:22:23.326 [2024-05-15 01:09:35.581943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.326 [2024-05-15 01:09:35.582162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.326 [2024-05-15 01:09:35.582205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.326 qpair failed and we were unable to recover it. 00:22:23.326 [2024-05-15 01:09:35.582410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.326 [2024-05-15 01:09:35.582592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.326 [2024-05-15 01:09:35.582618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.326 qpair failed and we were unable to recover it. 00:22:23.326 [2024-05-15 01:09:35.582809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.326 [2024-05-15 01:09:35.583012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.326 [2024-05-15 01:09:35.583055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.326 qpair failed and we were unable to recover it. 00:22:23.326 [2024-05-15 01:09:35.583247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.326 [2024-05-15 01:09:35.583478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.326 [2024-05-15 01:09:35.583507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.326 qpair failed and we were unable to recover it. 00:22:23.326 [2024-05-15 01:09:35.583714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.326 [2024-05-15 01:09:35.583924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.326 [2024-05-15 01:09:35.583960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.326 qpair failed and we were unable to recover it. 00:22:23.326 [2024-05-15 01:09:35.584151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.326 [2024-05-15 01:09:35.584393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.326 [2024-05-15 01:09:35.584420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.326 qpair failed and we were unable to recover it. 00:22:23.326 [2024-05-15 01:09:35.584620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.326 [2024-05-15 01:09:35.584826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.326 [2024-05-15 01:09:35.584852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.326 qpair failed and we were unable to recover it. 
00:22:23.326 [2024-05-15 01:09:35.585069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.326 [2024-05-15 01:09:35.585331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.326 [2024-05-15 01:09:35.585373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.326 qpair failed and we were unable to recover it. 00:22:23.326 [2024-05-15 01:09:35.585579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.326 [2024-05-15 01:09:35.585817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.326 [2024-05-15 01:09:35.585842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.326 qpair failed and we were unable to recover it. 00:22:23.326 [2024-05-15 01:09:35.586054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.326 [2024-05-15 01:09:35.586263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.326 [2024-05-15 01:09:35.586305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.326 qpair failed and we were unable to recover it. 00:22:23.326 [2024-05-15 01:09:35.586548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.326 [2024-05-15 01:09:35.586764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.326 [2024-05-15 01:09:35.586790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.326 qpair failed and we were unable to recover it. 00:22:23.326 [2024-05-15 01:09:35.586982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.326 [2024-05-15 01:09:35.587221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.326 [2024-05-15 01:09:35.587249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.326 qpair failed and we were unable to recover it. 00:22:23.326 [2024-05-15 01:09:35.587515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.326 [2024-05-15 01:09:35.587722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.326 [2024-05-15 01:09:35.587748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.326 qpair failed and we were unable to recover it. 00:22:23.326 [2024-05-15 01:09:35.587940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.326 [2024-05-15 01:09:35.588183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.326 [2024-05-15 01:09:35.588227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.326 qpair failed and we were unable to recover it. 
00:22:23.331 [2024-05-15 01:09:35.659866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.331 [2024-05-15 01:09:35.660055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.331 [2024-05-15 01:09:35.660099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.331 qpair failed and we were unable to recover it. 00:22:23.331 [2024-05-15 01:09:35.660291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.331 [2024-05-15 01:09:35.660531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.331 [2024-05-15 01:09:35.660575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.331 qpair failed and we were unable to recover it. 00:22:23.331 [2024-05-15 01:09:35.660740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.331 [2024-05-15 01:09:35.660951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.331 [2024-05-15 01:09:35.660977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.331 qpair failed and we were unable to recover it. 00:22:23.331 [2024-05-15 01:09:35.661190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.331 [2024-05-15 01:09:35.661401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.331 [2024-05-15 01:09:35.661443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.331 qpair failed and we were unable to recover it. 00:22:23.331 [2024-05-15 01:09:35.661650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.331 [2024-05-15 01:09:35.661827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.331 [2024-05-15 01:09:35.661856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.331 qpair failed and we were unable to recover it. 00:22:23.331 [2024-05-15 01:09:35.662057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.331 [2024-05-15 01:09:35.662261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.331 [2024-05-15 01:09:35.662304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.331 qpair failed and we were unable to recover it. 00:22:23.331 [2024-05-15 01:09:35.662491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.331 [2024-05-15 01:09:35.662684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.331 [2024-05-15 01:09:35.662726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.331 qpair failed and we were unable to recover it. 
00:22:23.331 [2024-05-15 01:09:35.662892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.331 [2024-05-15 01:09:35.663085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.331 [2024-05-15 01:09:35.663128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.332 qpair failed and we were unable to recover it. 00:22:23.332 [2024-05-15 01:09:35.663289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.663472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.663516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.332 qpair failed and we were unable to recover it. 00:22:23.332 [2024-05-15 01:09:35.663736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.663910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.663943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.332 qpair failed and we were unable to recover it. 00:22:23.332 [2024-05-15 01:09:35.664159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.664386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.664428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.332 qpair failed and we were unable to recover it. 00:22:23.332 [2024-05-15 01:09:35.664625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.664831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.664856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.332 qpair failed and we were unable to recover it. 00:22:23.332 [2024-05-15 01:09:35.665048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.665262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.665307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.332 qpair failed and we were unable to recover it. 00:22:23.332 [2024-05-15 01:09:35.665508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.665716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.665758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.332 qpair failed and we were unable to recover it. 
00:22:23.332 [2024-05-15 01:09:35.665970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.666169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.666217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.332 qpair failed and we were unable to recover it. 00:22:23.332 [2024-05-15 01:09:35.666435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.666640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.666668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.332 qpair failed and we were unable to recover it. 00:22:23.332 [2024-05-15 01:09:35.666861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.667020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.667045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.332 qpair failed and we were unable to recover it. 00:22:23.332 [2024-05-15 01:09:35.667287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.667600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.667650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.332 qpair failed and we were unable to recover it. 00:22:23.332 [2024-05-15 01:09:35.667840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.668058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.668084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.332 qpair failed and we were unable to recover it. 00:22:23.332 [2024-05-15 01:09:35.668281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.668515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.668559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.332 qpair failed and we were unable to recover it. 00:22:23.332 [2024-05-15 01:09:35.668756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.668947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.668973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.332 qpair failed and we were unable to recover it. 
00:22:23.332 [2024-05-15 01:09:35.669189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.669410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.669454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.332 qpair failed and we were unable to recover it. 00:22:23.332 [2024-05-15 01:09:35.669632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.669845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.669870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.332 qpair failed and we were unable to recover it. 00:22:23.332 [2024-05-15 01:09:35.670043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.670236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.670280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.332 qpair failed and we were unable to recover it. 00:22:23.332 [2024-05-15 01:09:35.670478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.670711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.670739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.332 qpair failed and we were unable to recover it. 00:22:23.332 [2024-05-15 01:09:35.670959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.671150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.671178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.332 qpair failed and we were unable to recover it. 00:22:23.332 [2024-05-15 01:09:35.671422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.671638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.671681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.332 qpair failed and we were unable to recover it. 00:22:23.332 [2024-05-15 01:09:35.671873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.672055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.672080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.332 qpair failed and we were unable to recover it. 
00:22:23.332 [2024-05-15 01:09:35.672269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.672482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.672526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.332 qpair failed and we were unable to recover it. 00:22:23.332 [2024-05-15 01:09:35.672686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.672859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.672884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.332 qpair failed and we were unable to recover it. 00:22:23.332 [2024-05-15 01:09:35.673076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.673309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.673352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.332 qpair failed and we were unable to recover it. 00:22:23.332 [2024-05-15 01:09:35.673574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.673767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.673792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.332 qpair failed and we were unable to recover it. 00:22:23.332 [2024-05-15 01:09:35.673957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.674166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.674210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.332 qpair failed and we were unable to recover it. 00:22:23.332 [2024-05-15 01:09:35.674407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.674764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.674815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.332 qpair failed and we were unable to recover it. 00:22:23.332 [2024-05-15 01:09:35.675023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.675246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.675289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.332 qpair failed and we were unable to recover it. 
00:22:23.332 [2024-05-15 01:09:35.675516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.332 [2024-05-15 01:09:35.675699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.675726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.333 qpair failed and we were unable to recover it. 00:22:23.333 [2024-05-15 01:09:35.675895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.676112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.676155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.333 qpair failed and we were unable to recover it. 00:22:23.333 [2024-05-15 01:09:35.676369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.676568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.676611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.333 qpair failed and we were unable to recover it. 00:22:23.333 [2024-05-15 01:09:35.676837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.677053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.677097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.333 qpair failed and we were unable to recover it. 00:22:23.333 [2024-05-15 01:09:35.677319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.677524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.677568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.333 qpair failed and we were unable to recover it. 00:22:23.333 [2024-05-15 01:09:35.677757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.677942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.677968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.333 qpair failed and we were unable to recover it. 00:22:23.333 [2024-05-15 01:09:35.678153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.678354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.678396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.333 qpair failed and we were unable to recover it. 
00:22:23.333 [2024-05-15 01:09:35.678608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.678835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.678860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.333 qpair failed and we were unable to recover it. 00:22:23.333 [2024-05-15 01:09:35.679051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.679269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.679311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.333 qpair failed and we were unable to recover it. 00:22:23.333 [2024-05-15 01:09:35.679584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.679820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.679845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.333 qpair failed and we were unable to recover it. 00:22:23.333 [2024-05-15 01:09:35.680069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.680266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.680309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.333 qpair failed and we were unable to recover it. 00:22:23.333 [2024-05-15 01:09:35.680504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.680740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.680784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.333 qpair failed and we were unable to recover it. 00:22:23.333 [2024-05-15 01:09:35.680971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.681181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.681224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.333 qpair failed and we were unable to recover it. 00:22:23.333 [2024-05-15 01:09:35.681411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.681639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.681682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.333 qpair failed and we were unable to recover it. 
00:22:23.333 [2024-05-15 01:09:35.681871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.682053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.682097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.333 qpair failed and we were unable to recover it. 00:22:23.333 [2024-05-15 01:09:35.682287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.682639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.682691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.333 qpair failed and we were unable to recover it. 00:22:23.333 [2024-05-15 01:09:35.682863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.683074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.683118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.333 qpair failed and we were unable to recover it. 00:22:23.333 [2024-05-15 01:09:35.683334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.683687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.683743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.333 qpair failed and we were unable to recover it. 00:22:23.333 [2024-05-15 01:09:35.683939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.684125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.684169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.333 qpair failed and we were unable to recover it. 00:22:23.333 [2024-05-15 01:09:35.684362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.684576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.684619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.333 qpair failed and we were unable to recover it. 00:22:23.333 [2024-05-15 01:09:35.684817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.684988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.685014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.333 qpair failed and we were unable to recover it. 
00:22:23.333 [2024-05-15 01:09:35.685228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.685432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.685475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.333 qpair failed and we were unable to recover it. 00:22:23.333 [2024-05-15 01:09:35.685669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.685885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.685910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.333 qpair failed and we were unable to recover it. 00:22:23.333 [2024-05-15 01:09:35.686077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.686254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.686282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.333 qpair failed and we were unable to recover it. 00:22:23.333 [2024-05-15 01:09:35.686514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.686772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.686801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.333 qpair failed and we were unable to recover it. 00:22:23.333 [2024-05-15 01:09:35.686987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.687248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.687292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.333 qpair failed and we were unable to recover it. 00:22:23.333 [2024-05-15 01:09:35.687496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.687708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.687750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.333 qpair failed and we were unable to recover it. 00:22:23.333 [2024-05-15 01:09:35.687948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.688136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.688162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.333 qpair failed and we were unable to recover it. 
00:22:23.333 [2024-05-15 01:09:35.688356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.333 [2024-05-15 01:09:35.688571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.334 [2024-05-15 01:09:35.688614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.334 qpair failed and we were unable to recover it. 00:22:23.334 [2024-05-15 01:09:35.688809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.334 [2024-05-15 01:09:35.689004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.334 [2024-05-15 01:09:35.689030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.334 qpair failed and we were unable to recover it. 00:22:23.334 [2024-05-15 01:09:35.689265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.334 [2024-05-15 01:09:35.689477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.334 [2024-05-15 01:09:35.689520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.334 qpair failed and we were unable to recover it. 00:22:23.334 [2024-05-15 01:09:35.689711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.334 [2024-05-15 01:09:35.689918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.334 [2024-05-15 01:09:35.689952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.334 qpair failed and we were unable to recover it. 00:22:23.334 [2024-05-15 01:09:35.690141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.334 [2024-05-15 01:09:35.690378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.334 [2024-05-15 01:09:35.690406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.334 qpair failed and we were unable to recover it. 00:22:23.334 [2024-05-15 01:09:35.690653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.334 [2024-05-15 01:09:35.690875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.334 [2024-05-15 01:09:35.690900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.334 qpair failed and we were unable to recover it. 00:22:23.334 [2024-05-15 01:09:35.691120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.334 [2024-05-15 01:09:35.691329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.334 [2024-05-15 01:09:35.691374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.334 qpair failed and we were unable to recover it. 
00:22:23.334 [2024-05-15 01:09:35.691592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.605 [2024-05-15 01:09:35.691771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.605 [2024-05-15 01:09:35.691796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.605 qpair failed and we were unable to recover it. 00:22:23.605 [2024-05-15 01:09:35.691991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.605 [2024-05-15 01:09:35.692194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.692239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.606 qpair failed and we were unable to recover it. 00:22:23.606 [2024-05-15 01:09:35.692454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.692664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.692707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.606 qpair failed and we were unable to recover it. 00:22:23.606 [2024-05-15 01:09:35.692944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.693142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.693185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.606 qpair failed and we were unable to recover it. 00:22:23.606 [2024-05-15 01:09:35.693401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.693612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.693655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.606 qpair failed and we were unable to recover it. 00:22:23.606 [2024-05-15 01:09:35.693843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.694056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.694100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.606 qpair failed and we were unable to recover it. 00:22:23.606 [2024-05-15 01:09:35.694299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.694504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.694549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.606 qpair failed and we were unable to recover it. 
00:22:23.606 [2024-05-15 01:09:35.694717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.694880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.694905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.606 qpair failed and we were unable to recover it. 00:22:23.606 [2024-05-15 01:09:35.695117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.695318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.695361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.606 qpair failed and we were unable to recover it. 00:22:23.606 [2024-05-15 01:09:35.695575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.695806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.695831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.606 qpair failed and we were unable to recover it. 00:22:23.606 [2024-05-15 01:09:35.696046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.696247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.696290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.606 qpair failed and we were unable to recover it. 00:22:23.606 [2024-05-15 01:09:35.696504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.696710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.696752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.606 qpair failed and we were unable to recover it. 00:22:23.606 [2024-05-15 01:09:35.696910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.697134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.697177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.606 qpair failed and we were unable to recover it. 00:22:23.606 [2024-05-15 01:09:35.697384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.697623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.697650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.606 qpair failed and we were unable to recover it. 
00:22:23.606 [2024-05-15 01:09:35.697856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.698070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.698114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.606 qpair failed and we were unable to recover it. 00:22:23.606 [2024-05-15 01:09:35.698358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.698702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.698754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.606 qpair failed and we were unable to recover it. 00:22:23.606 [2024-05-15 01:09:35.698954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.699135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.699179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.606 qpair failed and we were unable to recover it. 00:22:23.606 [2024-05-15 01:09:35.699381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.699583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.699626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.606 qpair failed and we were unable to recover it. 00:22:23.606 [2024-05-15 01:09:35.699820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.700004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.700047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.606 qpair failed and we were unable to recover it. 00:22:23.606 [2024-05-15 01:09:35.700260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.700482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.700525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.606 qpair failed and we were unable to recover it. 00:22:23.606 [2024-05-15 01:09:35.700705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.700885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.700910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.606 qpair failed and we were unable to recover it. 
00:22:23.606 [2024-05-15 01:09:35.701207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.701453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.701484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.606 qpair failed and we were unable to recover it. 00:22:23.606 [2024-05-15 01:09:35.701701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.701879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.701907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.606 qpair failed and we were unable to recover it. 00:22:23.606 [2024-05-15 01:09:35.702096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.702279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.702307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.606 qpair failed and we were unable to recover it. 00:22:23.606 [2024-05-15 01:09:35.702592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.702873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.702901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:23.606 qpair failed and we were unable to recover it. 00:22:23.606 [2024-05-15 01:09:35.703146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.703366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.703394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.606 qpair failed and we were unable to recover it. 00:22:23.606 [2024-05-15 01:09:35.703595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.606 [2024-05-15 01:09:35.703839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.703885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.607 qpair failed and we were unable to recover it. 00:22:23.607 [2024-05-15 01:09:35.704094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.704290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.704333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.607 qpair failed and we were unable to recover it. 
00:22:23.607 [2024-05-15 01:09:35.704557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.704787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.704813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.607 qpair failed and we were unable to recover it. 00:22:23.607 [2024-05-15 01:09:35.704997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.705205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.705247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.607 qpair failed and we were unable to recover it. 00:22:23.607 [2024-05-15 01:09:35.705423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.705653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.705694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.607 qpair failed and we were unable to recover it. 00:22:23.607 [2024-05-15 01:09:35.705852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.706047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.706091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.607 qpair failed and we were unable to recover it. 00:22:23.607 [2024-05-15 01:09:35.706286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.706496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.706540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.607 qpair failed and we were unable to recover it. 00:22:23.607 [2024-05-15 01:09:35.706719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.706904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.706934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.607 qpair failed and we were unable to recover it. 00:22:23.607 [2024-05-15 01:09:35.707127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.707327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.707370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.607 qpair failed and we were unable to recover it. 
00:22:23.607 [2024-05-15 01:09:35.707583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.707797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.707822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.607 qpair failed and we were unable to recover it. 00:22:23.607 [2024-05-15 01:09:35.708014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.708253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.708281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.607 qpair failed and we were unable to recover it. 00:22:23.607 [2024-05-15 01:09:35.708485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.708767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.708827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.607 qpair failed and we were unable to recover it. 00:22:23.607 [2024-05-15 01:09:35.709036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.709245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.709288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.607 qpair failed and we were unable to recover it. 00:22:23.607 [2024-05-15 01:09:35.709479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.709725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.709752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.607 qpair failed and we were unable to recover it. 00:22:23.607 [2024-05-15 01:09:35.709912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.710137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.710181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.607 qpair failed and we were unable to recover it. 00:22:23.607 [2024-05-15 01:09:35.710360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.710656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.710707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.607 qpair failed and we were unable to recover it. 
00:22:23.607 [2024-05-15 01:09:35.710895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.711063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.711089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.607 qpair failed and we were unable to recover it. 00:22:23.607 [2024-05-15 01:09:35.711273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.711481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.711524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.607 qpair failed and we were unable to recover it. 00:22:23.607 [2024-05-15 01:09:35.711740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.711947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.711973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.607 qpair failed and we were unable to recover it. 00:22:23.607 [2024-05-15 01:09:35.712194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.712431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.712474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.607 qpair failed and we were unable to recover it. 00:22:23.607 [2024-05-15 01:09:35.712691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.712892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.712917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.607 qpair failed and we were unable to recover it. 00:22:23.607 [2024-05-15 01:09:35.713121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.713336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.713380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.607 qpair failed and we were unable to recover it. 00:22:23.607 [2024-05-15 01:09:35.713611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.713791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.713816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.607 qpair failed and we were unable to recover it. 
00:22:23.607 [2024-05-15 01:09:35.713973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.714182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.714225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.607 qpair failed and we were unable to recover it. 00:22:23.607 [2024-05-15 01:09:35.714407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.714616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.714660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.607 qpair failed and we were unable to recover it. 00:22:23.607 [2024-05-15 01:09:35.714825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.715035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.715079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.607 qpair failed and we were unable to recover it. 00:22:23.607 [2024-05-15 01:09:35.715264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.715551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.715604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.607 qpair failed and we were unable to recover it. 00:22:23.607 [2024-05-15 01:09:35.715798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.715984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.716013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.607 qpair failed and we were unable to recover it. 00:22:23.607 [2024-05-15 01:09:35.716229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.716405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.716432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.607 qpair failed and we were unable to recover it. 00:22:23.607 [2024-05-15 01:09:35.716629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.716848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.607 [2024-05-15 01:09:35.716873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.607 qpair failed and we were unable to recover it. 
00:22:23.608 [2024-05-15 01:09:35.717050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.717258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.717299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.608 qpair failed and we were unable to recover it. 00:22:23.608 [2024-05-15 01:09:35.717488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.717668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.717693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.608 qpair failed and we were unable to recover it. 00:22:23.608 [2024-05-15 01:09:35.717862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.718060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.718104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.608 qpair failed and we were unable to recover it. 00:22:23.608 [2024-05-15 01:09:35.718287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.718494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.718536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.608 qpair failed and we were unable to recover it. 00:22:23.608 [2024-05-15 01:09:35.718727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.718914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.718944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.608 qpair failed and we were unable to recover it. 00:22:23.608 [2024-05-15 01:09:35.719164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.719388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.719435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.608 qpair failed and we were unable to recover it. 00:22:23.608 [2024-05-15 01:09:35.719658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.719837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.719864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.608 qpair failed and we were unable to recover it. 
00:22:23.608 [2024-05-15 01:09:35.720079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.720288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.720332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.608 qpair failed and we were unable to recover it. 00:22:23.608 [2024-05-15 01:09:35.720544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.720736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.720762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.608 qpair failed and we were unable to recover it. 00:22:23.608 [2024-05-15 01:09:35.720961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.721149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.721184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.608 qpair failed and we were unable to recover it. 00:22:23.608 [2024-05-15 01:09:35.721390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.721603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.721646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.608 qpair failed and we were unable to recover it. 00:22:23.608 [2024-05-15 01:09:35.721813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.722018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.722070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.608 qpair failed and we were unable to recover it. 00:22:23.608 [2024-05-15 01:09:35.722249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.722508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.722551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.608 qpair failed and we were unable to recover it. 00:22:23.608 [2024-05-15 01:09:35.722741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.722940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.722966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.608 qpair failed and we were unable to recover it. 
00:22:23.608 [2024-05-15 01:09:35.723179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.723408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.723450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.608 qpair failed and we were unable to recover it. 00:22:23.608 [2024-05-15 01:09:35.723625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.723804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.723829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.608 qpair failed and we were unable to recover it. 00:22:23.608 [2024-05-15 01:09:35.724043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.724271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.724313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.608 qpair failed and we were unable to recover it. 00:22:23.608 [2024-05-15 01:09:35.724531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.724744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.724770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.608 qpair failed and we were unable to recover it. 00:22:23.608 [2024-05-15 01:09:35.724939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.725155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.725180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.608 qpair failed and we were unable to recover it. 00:22:23.608 [2024-05-15 01:09:35.725393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.725623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.725669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.608 qpair failed and we were unable to recover it. 00:22:23.608 [2024-05-15 01:09:35.725870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.726037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.726064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.608 qpair failed and we were unable to recover it. 
00:22:23.608 [2024-05-15 01:09:35.726285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.726482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.726525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.608 qpair failed and we were unable to recover it. 00:22:23.608 [2024-05-15 01:09:35.726714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.726901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.726927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.608 qpair failed and we were unable to recover it. 00:22:23.608 [2024-05-15 01:09:35.727134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.727331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.727376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.608 qpair failed and we were unable to recover it. 00:22:23.608 [2024-05-15 01:09:35.727561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.727807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.727833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.608 qpair failed and we were unable to recover it. 00:22:23.608 [2024-05-15 01:09:35.728053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.728256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.728298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.608 qpair failed and we were unable to recover it. 00:22:23.608 [2024-05-15 01:09:35.728489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.728704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.728730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.608 qpair failed and we were unable to recover it. 00:22:23.608 [2024-05-15 01:09:35.728899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.608 [2024-05-15 01:09:35.729124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.729168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.609 qpair failed and we were unable to recover it. 
00:22:23.609 [2024-05-15 01:09:35.729375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.729602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.729645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.609 qpair failed and we were unable to recover it. 00:22:23.609 [2024-05-15 01:09:35.729807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.730044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.730096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.609 qpair failed and we were unable to recover it. 00:22:23.609 [2024-05-15 01:09:35.730286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.730546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.730587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.609 qpair failed and we were unable to recover it. 00:22:23.609 [2024-05-15 01:09:35.730779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.730936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.730962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.609 qpair failed and we were unable to recover it. 00:22:23.609 [2024-05-15 01:09:35.731128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.731323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.731366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.609 qpair failed and we were unable to recover it. 00:22:23.609 [2024-05-15 01:09:35.731591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.731792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.731817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.609 qpair failed and we were unable to recover it. 00:22:23.609 [2024-05-15 01:09:35.732033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.732281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.732324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.609 qpair failed and we were unable to recover it. 
00:22:23.609 [2024-05-15 01:09:35.732506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.732744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.732772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.609 qpair failed and we were unable to recover it. 00:22:23.609 [2024-05-15 01:09:35.733024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.733323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.733374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.609 qpair failed and we were unable to recover it. 00:22:23.609 [2024-05-15 01:09:35.733586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.733794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.733819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.609 qpair failed and we were unable to recover it. 00:22:23.609 [2024-05-15 01:09:35.734004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.734237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.734279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.609 qpair failed and we were unable to recover it. 00:22:23.609 [2024-05-15 01:09:35.734469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.734672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.734719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.609 qpair failed and we were unable to recover it. 00:22:23.609 [2024-05-15 01:09:35.734946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.735123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.735165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.609 qpair failed and we were unable to recover it. 00:22:23.609 [2024-05-15 01:09:35.735413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.735703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.735745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.609 qpair failed and we were unable to recover it. 
00:22:23.609 [2024-05-15 01:09:35.735904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.736067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.736092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.609 qpair failed and we were unable to recover it. 00:22:23.609 [2024-05-15 01:09:35.736276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.736506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.736534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.609 qpair failed and we were unable to recover it. 00:22:23.609 [2024-05-15 01:09:35.736766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.737009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.737038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.609 qpair failed and we were unable to recover it. 00:22:23.609 [2024-05-15 01:09:35.737269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.737493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.737535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.609 qpair failed and we were unable to recover it. 00:22:23.609 [2024-05-15 01:09:35.737728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.737915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.737948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.609 qpair failed and we were unable to recover it. 00:22:23.609 [2024-05-15 01:09:35.738158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.738399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.738426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.609 qpair failed and we were unable to recover it. 00:22:23.609 [2024-05-15 01:09:35.738635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.738833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.738857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.609 qpair failed and we were unable to recover it. 
00:22:23.609 [2024-05-15 01:09:35.739107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.739371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.739412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.609 qpair failed and we were unable to recover it. 00:22:23.609 [2024-05-15 01:09:35.739661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.739836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.739861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.609 qpair failed and we were unable to recover it. 00:22:23.609 [2024-05-15 01:09:35.740095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.740350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.740392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.609 qpair failed and we were unable to recover it. 00:22:23.609 [2024-05-15 01:09:35.740582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.740784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.740811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.609 qpair failed and we were unable to recover it. 00:22:23.609 [2024-05-15 01:09:35.741023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.741284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.609 [2024-05-15 01:09:35.741326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.609 qpair failed and we were unable to recover it. 00:22:23.610 [2024-05-15 01:09:35.741542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.741752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.741777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.610 qpair failed and we were unable to recover it. 00:22:23.610 [2024-05-15 01:09:35.741980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.742223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.742250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.610 qpair failed and we were unable to recover it. 
00:22:23.610 [2024-05-15 01:09:35.742464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.742671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.742697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.610 qpair failed and we were unable to recover it. 00:22:23.610 [2024-05-15 01:09:35.742857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.743069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.743114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.610 qpair failed and we were unable to recover it. 00:22:23.610 [2024-05-15 01:09:35.743336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.743602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.743628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.610 qpair failed and we were unable to recover it. 00:22:23.610 [2024-05-15 01:09:35.743818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.744028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.744072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.610 qpair failed and we were unable to recover it. 00:22:23.610 [2024-05-15 01:09:35.744299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.744688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.744742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.610 qpair failed and we were unable to recover it. 00:22:23.610 [2024-05-15 01:09:35.744948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.745138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.745163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.610 qpair failed and we were unable to recover it. 00:22:23.610 [2024-05-15 01:09:35.745380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.745642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.745685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.610 qpair failed and we were unable to recover it. 
00:22:23.610 [2024-05-15 01:09:35.745848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.746009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.746035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.610 qpair failed and we were unable to recover it. 00:22:23.610 [2024-05-15 01:09:35.746248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.746453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.746495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.610 qpair failed and we were unable to recover it. 00:22:23.610 [2024-05-15 01:09:35.746652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.746841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.746867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.610 qpair failed and we were unable to recover it. 00:22:23.610 [2024-05-15 01:09:35.747045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.747275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.747317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.610 qpair failed and we were unable to recover it. 00:22:23.610 [2024-05-15 01:09:35.747566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.747747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.747774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.610 qpair failed and we were unable to recover it. 00:22:23.610 [2024-05-15 01:09:35.747986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.748191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.748236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.610 qpair failed and we were unable to recover it. 00:22:23.610 [2024-05-15 01:09:35.748480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.748817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.748881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.610 qpair failed and we were unable to recover it. 
00:22:23.610 [2024-05-15 01:09:35.749127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.749353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.749395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.610 qpair failed and we were unable to recover it. 00:22:23.610 [2024-05-15 01:09:35.749617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.749824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.749849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.610 qpair failed and we were unable to recover it. 00:22:23.610 [2024-05-15 01:09:35.750071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.750314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.750368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.610 qpair failed and we were unable to recover it. 00:22:23.610 [2024-05-15 01:09:35.750582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.750783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.750809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.610 qpair failed and we were unable to recover it. 00:22:23.610 [2024-05-15 01:09:35.751043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.751425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.751477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.610 qpair failed and we were unable to recover it. 00:22:23.610 [2024-05-15 01:09:35.751727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.751918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.610 [2024-05-15 01:09:35.751948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.610 qpair failed and we were unable to recover it. 00:22:23.611 [2024-05-15 01:09:35.752160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.752465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.752527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.611 qpair failed and we were unable to recover it. 
00:22:23.611 [2024-05-15 01:09:35.752741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.752950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.752976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.611 qpair failed and we were unable to recover it. 00:22:23.611 [2024-05-15 01:09:35.753197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.753449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.753491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.611 qpair failed and we were unable to recover it. 00:22:23.611 [2024-05-15 01:09:35.753734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.753944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.753970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.611 qpair failed and we were unable to recover it. 00:22:23.611 [2024-05-15 01:09:35.754163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.754420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.754463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.611 qpair failed and we were unable to recover it. 00:22:23.611 [2024-05-15 01:09:35.754668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.754848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.754873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.611 qpair failed and we were unable to recover it. 00:22:23.611 [2024-05-15 01:09:35.755149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.755370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.755415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.611 qpair failed and we were unable to recover it. 00:22:23.611 [2024-05-15 01:09:35.755630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.755828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.755853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.611 qpair failed and we were unable to recover it. 
00:22:23.611 [2024-05-15 01:09:35.756044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.756259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.756302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.611 qpair failed and we were unable to recover it. 00:22:23.611 [2024-05-15 01:09:35.756525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.756755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.756797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.611 qpair failed and we were unable to recover it. 00:22:23.611 [2024-05-15 01:09:35.756995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.757218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.757245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.611 qpair failed and we were unable to recover it. 00:22:23.611 [2024-05-15 01:09:35.757441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.757700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.757742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.611 qpair failed and we were unable to recover it. 00:22:23.611 [2024-05-15 01:09:35.757905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.758132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.758175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.611 qpair failed and we were unable to recover it. 00:22:23.611 [2024-05-15 01:09:35.758364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.758602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.758645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.611 qpair failed and we were unable to recover it. 00:22:23.611 [2024-05-15 01:09:35.758847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.759058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.759100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.611 qpair failed and we were unable to recover it. 
00:22:23.611 [2024-05-15 01:09:35.759314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.759569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.759610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.611 qpair failed and we were unable to recover it. 00:22:23.611 [2024-05-15 01:09:35.759827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.760013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.760039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.611 qpair failed and we were unable to recover it. 00:22:23.611 [2024-05-15 01:09:35.760197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.760380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.760422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.611 qpair failed and we were unable to recover it. 00:22:23.611 [2024-05-15 01:09:35.760643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.760824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.760849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.611 qpair failed and we were unable to recover it. 00:22:23.611 [2024-05-15 01:09:35.761030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.761275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.761317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.611 qpair failed and we were unable to recover it. 00:22:23.611 [2024-05-15 01:09:35.761531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.761773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.761803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.611 qpair failed and we were unable to recover it. 00:22:23.611 [2024-05-15 01:09:35.762072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.762334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.762375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.611 qpair failed and we were unable to recover it. 
00:22:23.611 [2024-05-15 01:09:35.762571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.762780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.762805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.611 qpair failed and we were unable to recover it. 00:22:23.611 [2024-05-15 01:09:35.763021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.763229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.763271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.611 qpair failed and we were unable to recover it. 00:22:23.611 [2024-05-15 01:09:35.763469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.763759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.763804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.611 qpair failed and we were unable to recover it. 00:22:23.611 [2024-05-15 01:09:35.763993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.764213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.764256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.611 qpair failed and we were unable to recover it. 00:22:23.611 [2024-05-15 01:09:35.764473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.764747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.764789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.611 qpair failed and we were unable to recover it. 00:22:23.611 [2024-05-15 01:09:35.765049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.765315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.611 [2024-05-15 01:09:35.765358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.611 qpair failed and we were unable to recover it. 00:22:23.612 [2024-05-15 01:09:35.765614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.765827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.765851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.612 qpair failed and we were unable to recover it. 
00:22:23.612 [2024-05-15 01:09:35.766114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.766487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.766545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.612 qpair failed and we were unable to recover it. 00:22:23.612 [2024-05-15 01:09:35.766771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.767081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.767136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.612 qpair failed and we were unable to recover it. 00:22:23.612 [2024-05-15 01:09:35.767344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.767640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.767693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.612 qpair failed and we were unable to recover it. 00:22:23.612 [2024-05-15 01:09:35.767852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.768072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.768116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.612 qpair failed and we were unable to recover it. 00:22:23.612 [2024-05-15 01:09:35.768337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.768629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.768672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.612 qpair failed and we were unable to recover it. 00:22:23.612 [2024-05-15 01:09:35.768861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.769069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.769113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.612 qpair failed and we were unable to recover it. 00:22:23.612 [2024-05-15 01:09:35.769328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.769589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.769641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.612 qpair failed and we were unable to recover it. 
00:22:23.612 [2024-05-15 01:09:35.769829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.770087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.770136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.612 qpair failed and we were unable to recover it. 00:22:23.612 [2024-05-15 01:09:35.770359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.770585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.770627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.612 qpair failed and we were unable to recover it. 00:22:23.612 [2024-05-15 01:09:35.770843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.771023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.771067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.612 qpair failed and we were unable to recover it. 00:22:23.612 [2024-05-15 01:09:35.771252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.771478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.771520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.612 qpair failed and we were unable to recover it. 00:22:23.612 [2024-05-15 01:09:35.771757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.771938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.771964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.612 qpair failed and we were unable to recover it. 00:22:23.612 [2024-05-15 01:09:35.772154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.772398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.772441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.612 qpair failed and we were unable to recover it. 00:22:23.612 [2024-05-15 01:09:35.772658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.772889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.772914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.612 qpair failed and we were unable to recover it. 
00:22:23.612 [2024-05-15 01:09:35.773111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.773349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.773391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.612 qpair failed and we were unable to recover it. 00:22:23.612 [2024-05-15 01:09:35.773614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.773799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.773826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.612 qpair failed and we were unable to recover it. 00:22:23.612 [2024-05-15 01:09:35.774008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.774244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.774287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.612 qpair failed and we were unable to recover it. 00:22:23.612 [2024-05-15 01:09:35.774510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.774860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.774909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.612 qpair failed and we were unable to recover it. 00:22:23.612 [2024-05-15 01:09:35.775129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.775346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.775393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.612 qpair failed and we were unable to recover it. 00:22:23.612 [2024-05-15 01:09:35.775560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.775777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.775801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.612 qpair failed and we were unable to recover it. 00:22:23.612 [2024-05-15 01:09:35.775969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.776132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.776167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.612 qpair failed and we were unable to recover it. 
00:22:23.612 [2024-05-15 01:09:35.776383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.776641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.776683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.612 qpair failed and we were unable to recover it. 00:22:23.612 [2024-05-15 01:09:35.776908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.777131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.777176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.612 qpair failed and we were unable to recover it. 00:22:23.612 [2024-05-15 01:09:35.777366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.777718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.777761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.612 qpair failed and we were unable to recover it. 00:22:23.612 [2024-05-15 01:09:35.777952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.778196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.778224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.612 qpair failed and we were unable to recover it. 00:22:23.612 [2024-05-15 01:09:35.778489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.778773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.778843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.612 qpair failed and we were unable to recover it. 00:22:23.612 [2024-05-15 01:09:35.779032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.779221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.779263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.612 qpair failed and we were unable to recover it. 00:22:23.612 [2024-05-15 01:09:35.779455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.779702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.779728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.612 qpair failed and we were unable to recover it. 
00:22:23.612 [2024-05-15 01:09:35.779883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.612 [2024-05-15 01:09:35.780074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.780116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.613 qpair failed and we were unable to recover it. 00:22:23.613 [2024-05-15 01:09:35.780365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.780562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.780604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.613 qpair failed and we were unable to recover it. 00:22:23.613 [2024-05-15 01:09:35.780756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.780957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.780985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.613 qpair failed and we were unable to recover it. 00:22:23.613 [2024-05-15 01:09:35.781174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.781391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.781419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.613 qpair failed and we were unable to recover it. 00:22:23.613 [2024-05-15 01:09:35.781651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.781852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.781876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.613 qpair failed and we were unable to recover it. 00:22:23.613 [2024-05-15 01:09:35.782068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.782298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.782325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.613 qpair failed and we were unable to recover it. 00:22:23.613 [2024-05-15 01:09:35.782535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.782757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.782801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.613 qpair failed and we were unable to recover it. 
00:22:23.613 [2024-05-15 01:09:35.782972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.783214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.783257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.613 qpair failed and we were unable to recover it. 00:22:23.613 [2024-05-15 01:09:35.783471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.783737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.783780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.613 qpair failed and we were unable to recover it. 00:22:23.613 [2024-05-15 01:09:35.783993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.784271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.784321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.613 qpair failed and we were unable to recover it. 00:22:23.613 [2024-05-15 01:09:35.784532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.784765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.784791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.613 qpair failed and we were unable to recover it. 00:22:23.613 [2024-05-15 01:09:35.784995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.785267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.785315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.613 qpair failed and we were unable to recover it. 00:22:23.613 [2024-05-15 01:09:35.785539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.785772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.785797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.613 qpair failed and we were unable to recover it. 00:22:23.613 [2024-05-15 01:09:35.786014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.786218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.786261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.613 qpair failed and we were unable to recover it. 
00:22:23.613 [2024-05-15 01:09:35.786511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.786765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.786808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.613 qpair failed and we were unable to recover it. 00:22:23.613 [2024-05-15 01:09:35.787012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.787210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.787254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.613 qpair failed and we were unable to recover it. 00:22:23.613 [2024-05-15 01:09:35.787471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.787698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.787740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.613 qpair failed and we were unable to recover it. 00:22:23.613 [2024-05-15 01:09:35.787941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.788136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.788179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.613 qpair failed and we were unable to recover it. 00:22:23.613 [2024-05-15 01:09:35.788423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.788726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.788775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.613 qpair failed and we were unable to recover it. 00:22:23.613 [2024-05-15 01:09:35.788993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.789217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.789260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.613 qpair failed and we were unable to recover it. 00:22:23.613 [2024-05-15 01:09:35.789481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.789742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.789777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.613 qpair failed and we were unable to recover it. 
00:22:23.613 [2024-05-15 01:09:35.789972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.790166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.790208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.613 qpair failed and we were unable to recover it. 00:22:23.613 [2024-05-15 01:09:35.790416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.790641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.790683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.613 qpair failed and we were unable to recover it. 00:22:23.613 [2024-05-15 01:09:35.790896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.791093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.791118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.613 qpair failed and we were unable to recover it. 00:22:23.613 [2024-05-15 01:09:35.791332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.791567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.791608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.613 qpair failed and we were unable to recover it. 00:22:23.613 [2024-05-15 01:09:35.791798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.791998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.792026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.613 qpair failed and we were unable to recover it. 00:22:23.613 [2024-05-15 01:09:35.792231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.792493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.792535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.613 qpair failed and we were unable to recover it. 00:22:23.613 [2024-05-15 01:09:35.792705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.792899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.792924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.613 qpair failed and we were unable to recover it. 
00:22:23.613 [2024-05-15 01:09:35.793142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.793365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.793408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.613 qpair failed and we were unable to recover it. 00:22:23.613 [2024-05-15 01:09:35.793653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.793871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.793895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.613 qpair failed and we were unable to recover it. 00:22:23.613 [2024-05-15 01:09:35.794094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.613 [2024-05-15 01:09:35.794338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.794380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.614 qpair failed and we were unable to recover it. 00:22:23.614 [2024-05-15 01:09:35.794622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.794792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.794816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.614 qpair failed and we were unable to recover it. 00:22:23.614 [2024-05-15 01:09:35.795035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.795254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.795298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.614 qpair failed and we were unable to recover it. 00:22:23.614 [2024-05-15 01:09:35.795478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.795687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.795712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.614 qpair failed and we were unable to recover it. 00:22:23.614 [2024-05-15 01:09:35.795904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.796154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.796196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.614 qpair failed and we were unable to recover it. 
00:22:23.614 [2024-05-15 01:09:35.796409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.796829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.796879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.614 qpair failed and we were unable to recover it. 00:22:23.614 [2024-05-15 01:09:35.797065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.797300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.797342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.614 qpair failed and we were unable to recover it. 00:22:23.614 [2024-05-15 01:09:35.797557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.797768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.797798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.614 qpair failed and we were unable to recover it. 00:22:23.614 [2024-05-15 01:09:35.798001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.798239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.798267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.614 qpair failed and we were unable to recover it. 00:22:23.614 [2024-05-15 01:09:35.798529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.798740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.798765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.614 qpair failed and we were unable to recover it. 00:22:23.614 [2024-05-15 01:09:35.798963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.799146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.799191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.614 qpair failed and we were unable to recover it. 00:22:23.614 [2024-05-15 01:09:35.799401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.799635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.799677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.614 qpair failed and we were unable to recover it. 
00:22:23.614 [2024-05-15 01:09:35.799982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.800174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.800216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.614 qpair failed and we were unable to recover it. 00:22:23.614 [2024-05-15 01:09:35.800478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.800806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.800867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.614 qpair failed and we were unable to recover it. 00:22:23.614 [2024-05-15 01:09:35.801074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.801295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.801338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.614 qpair failed and we were unable to recover it. 00:22:23.614 [2024-05-15 01:09:35.801558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.801852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.801894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.614 qpair failed and we were unable to recover it. 00:22:23.614 [2024-05-15 01:09:35.802101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.802291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.802333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.614 qpair failed and we were unable to recover it. 00:22:23.614 [2024-05-15 01:09:35.802547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.802784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.802832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.614 qpair failed and we were unable to recover it. 00:22:23.614 [2024-05-15 01:09:35.803025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.803213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.803255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.614 qpair failed and we were unable to recover it. 
00:22:23.614 [2024-05-15 01:09:35.803474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.803687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.803728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.614 qpair failed and we were unable to recover it. 00:22:23.614 [2024-05-15 01:09:35.803916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.804151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.804194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.614 qpair failed and we were unable to recover it. 00:22:23.614 [2024-05-15 01:09:35.804400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.804661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.804703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.614 qpair failed and we were unable to recover it. 00:22:23.614 [2024-05-15 01:09:35.804908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.805137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.805162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.614 qpair failed and we were unable to recover it. 00:22:23.614 [2024-05-15 01:09:35.805381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.805715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.805763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.614 qpair failed and we were unable to recover it. 00:22:23.614 [2024-05-15 01:09:35.805934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.806128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.806153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.614 qpair failed and we were unable to recover it. 00:22:23.614 [2024-05-15 01:09:35.806331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.806609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.806651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.614 qpair failed and we were unable to recover it. 
00:22:23.614 [2024-05-15 01:09:35.806861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.807109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.807142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.614 qpair failed and we were unable to recover it. 00:22:23.614 [2024-05-15 01:09:35.807326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.807545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.807593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.614 qpair failed and we were unable to recover it. 00:22:23.614 [2024-05-15 01:09:35.807852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.808025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.808051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.614 qpair failed and we were unable to recover it. 00:22:23.614 [2024-05-15 01:09:35.808259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.808498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.614 [2024-05-15 01:09:35.808541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.614 qpair failed and we were unable to recover it. 00:22:23.614 [2024-05-15 01:09:35.808724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.808922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.808962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.615 qpair failed and we were unable to recover it. 00:22:23.615 [2024-05-15 01:09:35.809131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.809352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.809397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.615 qpair failed and we were unable to recover it. 00:22:23.615 [2024-05-15 01:09:35.809596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.809829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.809854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.615 qpair failed and we were unable to recover it. 
00:22:23.615 [2024-05-15 01:09:35.810092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.810424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.810474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.615 qpair failed and we were unable to recover it. 00:22:23.615 [2024-05-15 01:09:35.810721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.810902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.810927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.615 qpair failed and we were unable to recover it. 00:22:23.615 [2024-05-15 01:09:35.811159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.811435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.811477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.615 qpair failed and we were unable to recover it. 00:22:23.615 [2024-05-15 01:09:35.811693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.811871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.811897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.615 qpair failed and we were unable to recover it. 00:22:23.615 [2024-05-15 01:09:35.812089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.812326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.812372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.615 qpair failed and we were unable to recover it. 00:22:23.615 [2024-05-15 01:09:35.812566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.812770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.812795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.615 qpair failed and we were unable to recover it. 00:22:23.615 [2024-05-15 01:09:35.813032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.813302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.813344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.615 qpair failed and we were unable to recover it. 
00:22:23.615 [2024-05-15 01:09:35.813562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.813792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.813817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.615 qpair failed and we were unable to recover it. 00:22:23.615 [2024-05-15 01:09:35.814034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.814231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.814273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.615 qpair failed and we were unable to recover it. 00:22:23.615 [2024-05-15 01:09:35.814488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.814685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.814726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.615 qpair failed and we were unable to recover it. 00:22:23.615 [2024-05-15 01:09:35.814944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.815154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.815196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.615 qpair failed and we were unable to recover it. 00:22:23.615 [2024-05-15 01:09:35.815421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.815649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.815690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.615 qpair failed and we were unable to recover it. 00:22:23.615 [2024-05-15 01:09:35.815908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.816102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.816127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.615 qpair failed and we were unable to recover it. 00:22:23.615 [2024-05-15 01:09:35.816320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.816510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.816552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.615 qpair failed and we were unable to recover it. 
00:22:23.615 [2024-05-15 01:09:35.816764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.816968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.816994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.615 qpair failed and we were unable to recover it. 00:22:23.615 [2024-05-15 01:09:35.817215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.817415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.817456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.615 qpair failed and we were unable to recover it. 00:22:23.615 [2024-05-15 01:09:35.817708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.817891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.817916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.615 qpair failed and we were unable to recover it. 00:22:23.615 [2024-05-15 01:09:35.818116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.818359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.818400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.615 qpair failed and we were unable to recover it. 00:22:23.615 [2024-05-15 01:09:35.818592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.818796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.818821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.615 qpair failed and we were unable to recover it. 00:22:23.615 [2024-05-15 01:09:35.819062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.819321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.819362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.615 qpair failed and we were unable to recover it. 00:22:23.615 [2024-05-15 01:09:35.819577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.819754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.819782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.615 qpair failed and we were unable to recover it. 
00:22:23.615 [2024-05-15 01:09:35.820014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.820213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-05-15 01:09:35.820255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.615 qpair failed and we were unable to recover it. 00:22:23.615 [2024-05-15 01:09:35.820476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.820704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.820747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.616 qpair failed and we were unable to recover it. 00:22:23.616 [2024-05-15 01:09:35.820954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.821168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.821211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.616 qpair failed and we were unable to recover it. 00:22:23.616 [2024-05-15 01:09:35.821432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.821666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.821696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.616 qpair failed and we were unable to recover it. 00:22:23.616 [2024-05-15 01:09:35.821927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.822147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.822172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.616 qpair failed and we were unable to recover it. 00:22:23.616 [2024-05-15 01:09:35.822383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.822585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.822626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.616 qpair failed and we were unable to recover it. 00:22:23.616 [2024-05-15 01:09:35.822826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.823011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.823038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.616 qpair failed and we were unable to recover it. 
00:22:23.616 [2024-05-15 01:09:35.823245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.823477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.823520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.616 qpair failed and we were unable to recover it. 00:22:23.616 [2024-05-15 01:09:35.823727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.823967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.823993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.616 qpair failed and we were unable to recover it. 00:22:23.616 [2024-05-15 01:09:35.824179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.824439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.824482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.616 qpair failed and we were unable to recover it. 00:22:23.616 [2024-05-15 01:09:35.824701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.824904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.824945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.616 qpair failed and we were unable to recover it. 00:22:23.616 [2024-05-15 01:09:35.825130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.825345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.825387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.616 qpair failed and we were unable to recover it. 00:22:23.616 [2024-05-15 01:09:35.825595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.825799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.825824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.616 qpair failed and we were unable to recover it. 00:22:23.616 [2024-05-15 01:09:35.826029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.826265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.826289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.616 qpair failed and we were unable to recover it. 
00:22:23.616 [2024-05-15 01:09:35.826519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.826742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.826785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.616 qpair failed and we were unable to recover it. 00:22:23.616 [2024-05-15 01:09:35.826999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.827247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.827290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.616 qpair failed and we were unable to recover it. 00:22:23.616 [2024-05-15 01:09:35.827494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.827752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.827792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.616 qpair failed and we were unable to recover it. 00:22:23.616 [2024-05-15 01:09:35.828052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.828292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.828334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.616 qpair failed and we were unable to recover it. 00:22:23.616 [2024-05-15 01:09:35.828512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.828772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.828813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.616 qpair failed and we were unable to recover it. 00:22:23.616 [2024-05-15 01:09:35.829038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.829277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.829319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.616 qpair failed and we were unable to recover it. 00:22:23.616 [2024-05-15 01:09:35.829470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.829686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.829729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.616 qpair failed and we were unable to recover it. 
00:22:23.616 [2024-05-15 01:09:35.829927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.830118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.830161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.616 qpair failed and we were unable to recover it. 00:22:23.616 [2024-05-15 01:09:35.830382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.830644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.830686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.616 qpair failed and we were unable to recover it. 00:22:23.616 [2024-05-15 01:09:35.830954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.831250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.831314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.616 qpair failed and we were unable to recover it. 00:22:23.616 [2024-05-15 01:09:35.831511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.831767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.831809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.616 qpair failed and we were unable to recover it. 00:22:23.616 [2024-05-15 01:09:35.831999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.832262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.832304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.616 qpair failed and we were unable to recover it. 00:22:23.616 [2024-05-15 01:09:35.832519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.832723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.832763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.616 qpair failed and we were unable to recover it. 00:22:23.616 [2024-05-15 01:09:35.832951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.833189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.833230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.616 qpair failed and we were unable to recover it. 
00:22:23.616 [2024-05-15 01:09:35.833440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.833674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.833702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.616 qpair failed and we were unable to recover it. 00:22:23.616 [2024-05-15 01:09:35.833923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.834142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.834191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.616 qpair failed and we were unable to recover it. 00:22:23.616 [2024-05-15 01:09:35.834414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.834749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.616 [2024-05-15 01:09:35.834789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.616 qpair failed and we were unable to recover it. 00:22:23.617 [2024-05-15 01:09:35.834964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.835192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.835234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.617 qpair failed and we were unable to recover it. 00:22:23.617 [2024-05-15 01:09:35.835577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.835801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.835827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.617 qpair failed and we were unable to recover it. 00:22:23.617 [2024-05-15 01:09:35.836022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.836259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.836302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.617 qpair failed and we were unable to recover it. 00:22:23.617 [2024-05-15 01:09:35.836575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.836794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.836818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.617 qpair failed and we were unable to recover it. 
00:22:23.617 [2024-05-15 01:09:35.837110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.837545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.837598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.617 qpair failed and we were unable to recover it. 00:22:23.617 [2024-05-15 01:09:35.837796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.837998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.838038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.617 qpair failed and we were unable to recover it. 00:22:23.617 [2024-05-15 01:09:35.838265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.838591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.838633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.617 qpair failed and we were unable to recover it. 00:22:23.617 [2024-05-15 01:09:35.838864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.839055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.839081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.617 qpair failed and we were unable to recover it. 00:22:23.617 [2024-05-15 01:09:35.839267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.839492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.839534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.617 qpair failed and we were unable to recover it. 00:22:23.617 [2024-05-15 01:09:35.839790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.839969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.839996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.617 qpair failed and we were unable to recover it. 00:22:23.617 [2024-05-15 01:09:35.840212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.840612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.840667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.617 qpair failed and we were unable to recover it. 
00:22:23.617 [2024-05-15 01:09:35.840862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.841079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.841122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.617 qpair failed and we were unable to recover it. 00:22:23.617 [2024-05-15 01:09:35.841456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.841865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.841941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.617 qpair failed and we were unable to recover it. 00:22:23.617 [2024-05-15 01:09:35.842152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.842381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.842423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.617 qpair failed and we were unable to recover it. 00:22:23.617 [2024-05-15 01:09:35.842644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.842838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.842863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.617 qpair failed and we were unable to recover it. 00:22:23.617 [2024-05-15 01:09:35.843126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.843386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.843428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.617 qpair failed and we were unable to recover it. 00:22:23.617 [2024-05-15 01:09:35.843682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.843875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.843900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.617 qpair failed and we were unable to recover it. 00:22:23.617 [2024-05-15 01:09:35.844113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.844320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.844361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.617 qpair failed and we were unable to recover it. 
00:22:23.617 [2024-05-15 01:09:35.844622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.844862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.844902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.617 qpair failed and we were unable to recover it. 00:22:23.617 [2024-05-15 01:09:35.845115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.845324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.845367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.617 qpair failed and we were unable to recover it. 00:22:23.617 [2024-05-15 01:09:35.845555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.845765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.845791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.617 qpair failed and we were unable to recover it. 00:22:23.617 [2024-05-15 01:09:35.846026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.846289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.846330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.617 qpair failed and we were unable to recover it. 00:22:23.617 [2024-05-15 01:09:35.846584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.846801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.846827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.617 qpair failed and we were unable to recover it. 00:22:23.617 [2024-05-15 01:09:35.847057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.847404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.847455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.617 qpair failed and we were unable to recover it. 00:22:23.617 [2024-05-15 01:09:35.847695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.847893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.847936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.617 qpair failed and we were unable to recover it. 
00:22:23.617 [2024-05-15 01:09:35.848158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.848453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.848494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.617 qpair failed and we were unable to recover it. 00:22:23.617 [2024-05-15 01:09:35.848753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.848962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.848988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.617 qpair failed and we were unable to recover it. 00:22:23.617 [2024-05-15 01:09:35.849187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.849416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.849458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.617 qpair failed and we were unable to recover it. 00:22:23.617 [2024-05-15 01:09:35.849636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.849868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.617 [2024-05-15 01:09:35.849893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.617 qpair failed and we were unable to recover it. 00:22:23.617 [2024-05-15 01:09:35.850117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.850370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.850412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.618 qpair failed and we were unable to recover it. 00:22:23.618 [2024-05-15 01:09:35.850666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.850869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.850894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.618 qpair failed and we were unable to recover it. 00:22:23.618 [2024-05-15 01:09:35.851118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.851414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.851475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.618 qpair failed and we were unable to recover it. 
00:22:23.618 [2024-05-15 01:09:35.851695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.851927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.851959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.618 qpair failed and we were unable to recover it. 00:22:23.618 [2024-05-15 01:09:35.852134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.852349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.852392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.618 qpair failed and we were unable to recover it. 00:22:23.618 [2024-05-15 01:09:35.852583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.852790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.852817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.618 qpair failed and we were unable to recover it. 00:22:23.618 [2024-05-15 01:09:35.852999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.853230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.853273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.618 qpair failed and we were unable to recover it. 00:22:23.618 [2024-05-15 01:09:35.853517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.853756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.853784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.618 qpair failed and we were unable to recover it. 00:22:23.618 [2024-05-15 01:09:35.854012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.854246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.854288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.618 qpair failed and we were unable to recover it. 00:22:23.618 [2024-05-15 01:09:35.854477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.854699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.854742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.618 qpair failed and we were unable to recover it. 
00:22:23.618 [2024-05-15 01:09:35.854935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.855147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.855192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.618 qpair failed and we were unable to recover it. 00:22:23.618 [2024-05-15 01:09:35.855414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.855670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.855713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.618 qpair failed and we were unable to recover it. 00:22:23.618 [2024-05-15 01:09:35.855899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.856071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.856098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.618 qpair failed and we were unable to recover it. 00:22:23.618 [2024-05-15 01:09:35.856307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.856569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.856611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.618 qpair failed and we were unable to recover it. 00:22:23.618 [2024-05-15 01:09:35.856802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.856990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.857017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.618 qpair failed and we were unable to recover it. 00:22:23.618 [2024-05-15 01:09:35.857230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.857428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.857470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.618 qpair failed and we were unable to recover it. 00:22:23.618 [2024-05-15 01:09:35.857683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.857864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.857889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.618 qpair failed and we were unable to recover it. 
00:22:23.618 [2024-05-15 01:09:35.858137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.858429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.858481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.618 qpair failed and we were unable to recover it. 00:22:23.618 [2024-05-15 01:09:35.858689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.858893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.858917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.618 qpair failed and we were unable to recover it. 00:22:23.618 [2024-05-15 01:09:35.859134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.859396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.859437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.618 qpair failed and we were unable to recover it. 00:22:23.618 [2024-05-15 01:09:35.859656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.859835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.859859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.618 qpair failed and we were unable to recover it. 00:22:23.618 [2024-05-15 01:09:35.860062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.860279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.860321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.618 qpair failed and we were unable to recover it. 00:22:23.618 [2024-05-15 01:09:35.860548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.860912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.860977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.618 qpair failed and we were unable to recover it. 00:22:23.618 [2024-05-15 01:09:35.861204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.861474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.861521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.618 qpair failed and we were unable to recover it. 
00:22:23.618 [2024-05-15 01:09:35.861760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.862008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.862035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.618 qpair failed and we were unable to recover it. 00:22:23.618 [2024-05-15 01:09:35.862228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.862576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.862623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.618 qpair failed and we were unable to recover it. 00:22:23.618 [2024-05-15 01:09:35.862833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.863087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.863131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.618 qpair failed and we were unable to recover it. 00:22:23.618 [2024-05-15 01:09:35.863346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.863693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.863741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.618 qpair failed and we were unable to recover it. 00:22:23.618 [2024-05-15 01:09:35.863905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.864126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.864151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.618 qpair failed and we were unable to recover it. 00:22:23.618 [2024-05-15 01:09:35.864394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.864618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.618 [2024-05-15 01:09:35.864662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.618 qpair failed and we were unable to recover it. 00:22:23.619 [2024-05-15 01:09:35.864848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.865002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.865028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.619 qpair failed and we were unable to recover it. 
00:22:23.619 [2024-05-15 01:09:35.865223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.865539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.865597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.619 qpair failed and we were unable to recover it. 00:22:23.619 [2024-05-15 01:09:35.865787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.865998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.866041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.619 qpair failed and we were unable to recover it. 00:22:23.619 [2024-05-15 01:09:35.866227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.866605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.866661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.619 qpair failed and we were unable to recover it. 00:22:23.619 [2024-05-15 01:09:35.866863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.867059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.867084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.619 qpair failed and we were unable to recover it. 00:22:23.619 [2024-05-15 01:09:35.867301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.867527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.867569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.619 qpair failed and we were unable to recover it. 00:22:23.619 [2024-05-15 01:09:35.867786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.867983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.868013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.619 qpair failed and we were unable to recover it. 00:22:23.619 [2024-05-15 01:09:35.868277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.868674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.868744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.619 qpair failed and we were unable to recover it. 
00:22:23.619 [2024-05-15 01:09:35.868937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.869128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.869153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.619 qpair failed and we were unable to recover it. 00:22:23.619 [2024-05-15 01:09:35.869369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.869603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.869645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.619 qpair failed and we were unable to recover it. 00:22:23.619 [2024-05-15 01:09:35.869800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.869960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.869987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.619 qpair failed and we were unable to recover it. 00:22:23.619 [2024-05-15 01:09:35.870301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.870656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.870713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.619 qpair failed and we were unable to recover it. 00:22:23.619 [2024-05-15 01:09:35.870878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.871127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.871171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.619 qpair failed and we were unable to recover it. 00:22:23.619 [2024-05-15 01:09:35.871453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.871682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.871724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.619 qpair failed and we were unable to recover it. 00:22:23.619 [2024-05-15 01:09:35.871909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.872117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.872142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.619 qpair failed and we were unable to recover it. 
00:22:23.619 [2024-05-15 01:09:35.872359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.872560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.872602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.619 qpair failed and we were unable to recover it. 00:22:23.619 [2024-05-15 01:09:35.872815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.873050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.873076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.619 qpair failed and we were unable to recover it. 00:22:23.619 [2024-05-15 01:09:35.873270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.873567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.873610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.619 qpair failed and we were unable to recover it. 00:22:23.619 [2024-05-15 01:09:35.873808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.874010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.874053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.619 qpair failed and we were unable to recover it. 00:22:23.619 [2024-05-15 01:09:35.874303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.874498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.874540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.619 qpair failed and we were unable to recover it. 00:22:23.619 [2024-05-15 01:09:35.874747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.874981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.875007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.619 qpair failed and we were unable to recover it. 00:22:23.619 [2024-05-15 01:09:35.875224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.875510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.875558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.619 qpair failed and we were unable to recover it. 
00:22:23.619 [2024-05-15 01:09:35.875743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.875958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.875983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.619 qpair failed and we were unable to recover it. 00:22:23.619 [2024-05-15 01:09:35.876209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.876418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.876460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.619 qpair failed and we were unable to recover it. 00:22:23.619 [2024-05-15 01:09:35.876679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.876935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.619 [2024-05-15 01:09:35.876980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.619 qpair failed and we were unable to recover it. 00:22:23.620 [2024-05-15 01:09:35.877184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.877391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.877433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.620 qpair failed and we were unable to recover it. 00:22:23.620 [2024-05-15 01:09:35.877654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.877853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.877878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.620 qpair failed and we were unable to recover it. 00:22:23.620 [2024-05-15 01:09:35.878069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.878264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.878292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.620 qpair failed and we were unable to recover it. 00:22:23.620 [2024-05-15 01:09:35.878547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.878767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.878809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.620 qpair failed and we were unable to recover it. 
00:22:23.620 [2024-05-15 01:09:35.879001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.879212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.879254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.620 qpair failed and we were unable to recover it. 00:22:23.620 [2024-05-15 01:09:35.879500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.879726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.879771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.620 qpair failed and we were unable to recover it. 00:22:23.620 [2024-05-15 01:09:35.879999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.880213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.880256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.620 qpair failed and we were unable to recover it. 00:22:23.620 [2024-05-15 01:09:35.880513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.880877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.880950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.620 qpair failed and we were unable to recover it. 00:22:23.620 [2024-05-15 01:09:35.881188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.881622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.881671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.620 qpair failed and we were unable to recover it. 00:22:23.620 [2024-05-15 01:09:35.881890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.882059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.882089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.620 qpair failed and we were unable to recover it. 00:22:23.620 [2024-05-15 01:09:35.882295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.882522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.882564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.620 qpair failed and we were unable to recover it. 
00:22:23.620 [2024-05-15 01:09:35.882774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.883031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.883075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.620 qpair failed and we were unable to recover it. 00:22:23.620 [2024-05-15 01:09:35.883316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.883572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.883616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.620 qpair failed and we were unable to recover it. 00:22:23.620 [2024-05-15 01:09:35.883793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.883986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.884015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.620 qpair failed and we were unable to recover it. 00:22:23.620 [2024-05-15 01:09:35.884239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.884510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.884552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.620 qpair failed and we were unable to recover it. 00:22:23.620 [2024-05-15 01:09:35.884771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.884943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.884969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.620 qpair failed and we were unable to recover it. 00:22:23.620 [2024-05-15 01:09:35.885221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.885522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.885577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.620 qpair failed and we were unable to recover it. 00:22:23.620 [2024-05-15 01:09:35.885810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.885988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.886016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.620 qpair failed and we were unable to recover it. 
00:22:23.620 [2024-05-15 01:09:35.886278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.886656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.886715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.620 qpair failed and we were unable to recover it. 00:22:23.620 [2024-05-15 01:09:35.886903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.887172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.887221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.620 qpair failed and we were unable to recover it. 00:22:23.620 [2024-05-15 01:09:35.887479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.887679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.887720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.620 qpair failed and we were unable to recover it. 00:22:23.620 [2024-05-15 01:09:35.887878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.888085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.888111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.620 qpair failed and we were unable to recover it. 00:22:23.620 [2024-05-15 01:09:35.888296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.888500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.888542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.620 qpair failed and we were unable to recover it. 00:22:23.620 [2024-05-15 01:09:35.888759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.888962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.888988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.620 qpair failed and we were unable to recover it. 00:22:23.620 [2024-05-15 01:09:35.889173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.889438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.620 [2024-05-15 01:09:35.889482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.620 qpair failed and we were unable to recover it. 
00:22:23.625 [2024-05-15 01:09:35.954306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.625 [2024-05-15 01:09:35.954540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.625 [2024-05-15 01:09:35.954582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.625 qpair failed and we were unable to recover it. 00:22:23.625 [2024-05-15 01:09:35.954749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.625 [2024-05-15 01:09:35.954968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.625 [2024-05-15 01:09:35.954994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.625 qpair failed and we were unable to recover it. 00:22:23.625 [2024-05-15 01:09:35.955228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.625 [2024-05-15 01:09:35.955520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.625 [2024-05-15 01:09:35.955569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.625 qpair failed and we were unable to recover it. 00:22:23.625 [2024-05-15 01:09:35.955778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.625 [2024-05-15 01:09:35.956008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.625 [2024-05-15 01:09:35.956051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.625 qpair failed and we were unable to recover it. 00:22:23.625 [2024-05-15 01:09:35.956242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.625 [2024-05-15 01:09:35.956469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.625 [2024-05-15 01:09:35.956512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.625 qpair failed and we were unable to recover it. 00:22:23.625 [2024-05-15 01:09:35.956746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.625 [2024-05-15 01:09:35.956954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.625 [2024-05-15 01:09:35.956986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.625 qpair failed and we were unable to recover it. 00:22:23.625 [2024-05-15 01:09:35.957185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.625 [2024-05-15 01:09:35.957382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.625 [2024-05-15 01:09:35.957424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.625 qpair failed and we were unable to recover it. 
00:22:23.625 [2024-05-15 01:09:35.957617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.625 [2024-05-15 01:09:35.957824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.957848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.626 qpair failed and we were unable to recover it. 00:22:23.626 [2024-05-15 01:09:35.958072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.958284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.958327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.626 qpair failed and we were unable to recover it. 00:22:23.626 [2024-05-15 01:09:35.958507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.958774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.958822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.626 qpair failed and we were unable to recover it. 00:22:23.626 [2024-05-15 01:09:35.959012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.959272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.959314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.626 qpair failed and we were unable to recover it. 00:22:23.626 [2024-05-15 01:09:35.959535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.959781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.959806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.626 qpair failed and we were unable to recover it. 00:22:23.626 [2024-05-15 01:09:35.959967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.960159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.960201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.626 qpair failed and we were unable to recover it. 00:22:23.626 [2024-05-15 01:09:35.960457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.960698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.960726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.626 qpair failed and we were unable to recover it. 
00:22:23.626 [2024-05-15 01:09:35.960940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.961150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.961192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.626 qpair failed and we were unable to recover it. 00:22:23.626 [2024-05-15 01:09:35.961439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.961670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.961716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.626 qpair failed and we were unable to recover it. 00:22:23.626 [2024-05-15 01:09:35.961940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.962104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.962131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.626 qpair failed and we were unable to recover it. 00:22:23.626 [2024-05-15 01:09:35.962293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.962482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.962525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.626 qpair failed and we were unable to recover it. 00:22:23.626 [2024-05-15 01:09:35.962738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.962949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.962975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.626 qpair failed and we were unable to recover it. 00:22:23.626 [2024-05-15 01:09:35.963185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.963417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.963460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.626 qpair failed and we were unable to recover it. 00:22:23.626 [2024-05-15 01:09:35.963674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.963905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.963935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.626 qpair failed and we were unable to recover it. 
00:22:23.626 [2024-05-15 01:09:35.964109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.964325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.964367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.626 qpair failed and we were unable to recover it. 00:22:23.626 [2024-05-15 01:09:35.964533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.964730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.964772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.626 qpair failed and we were unable to recover it. 00:22:23.626 [2024-05-15 01:09:35.964939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.965152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.965177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.626 qpair failed and we were unable to recover it. 00:22:23.626 [2024-05-15 01:09:35.965396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.965640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.965666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.626 qpair failed and we were unable to recover it. 00:22:23.626 [2024-05-15 01:09:35.965864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.966051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.966077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.626 qpair failed and we were unable to recover it. 00:22:23.626 [2024-05-15 01:09:35.966301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.966514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.966555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.626 qpair failed and we were unable to recover it. 00:22:23.626 [2024-05-15 01:09:35.966771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.966950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.966976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.626 qpair failed and we were unable to recover it. 
00:22:23.626 [2024-05-15 01:09:35.967192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.967418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.967461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.626 qpair failed and we were unable to recover it. 00:22:23.626 [2024-05-15 01:09:35.967678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.967913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.967945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.626 qpair failed and we were unable to recover it. 00:22:23.626 [2024-05-15 01:09:35.968127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.968332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.968375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.626 qpair failed and we were unable to recover it. 00:22:23.626 [2024-05-15 01:09:35.968572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.968803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.968827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.626 qpair failed and we were unable to recover it. 00:22:23.626 [2024-05-15 01:09:35.969039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.969267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.969313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.626 qpair failed and we were unable to recover it. 00:22:23.626 [2024-05-15 01:09:35.969508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.969715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.969741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.626 qpair failed and we were unable to recover it. 00:22:23.626 [2024-05-15 01:09:35.969937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.970101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.970126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.626 qpair failed and we were unable to recover it. 
00:22:23.626 [2024-05-15 01:09:35.970343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.970614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.970657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.626 qpair failed and we were unable to recover it. 00:22:23.626 [2024-05-15 01:09:35.970829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.970987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.626 [2024-05-15 01:09:35.971013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.626 qpair failed and we were unable to recover it. 00:22:23.626 [2024-05-15 01:09:35.971206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.971432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.971474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.627 qpair failed and we were unable to recover it. 00:22:23.627 [2024-05-15 01:09:35.971693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.971877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.971906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.627 qpair failed and we were unable to recover it. 00:22:23.627 [2024-05-15 01:09:35.972099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.972319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.972346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.627 qpair failed and we were unable to recover it. 00:22:23.627 [2024-05-15 01:09:35.972538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.972768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.972793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.627 qpair failed and we were unable to recover it. 00:22:23.627 [2024-05-15 01:09:35.972973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.973176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.973204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.627 qpair failed and we were unable to recover it. 
00:22:23.627 [2024-05-15 01:09:35.973437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.973652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.973677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.627 qpair failed and we were unable to recover it. 00:22:23.627 [2024-05-15 01:09:35.973867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.974089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.974132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.627 qpair failed and we were unable to recover it. 00:22:23.627 [2024-05-15 01:09:35.974353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.974583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.974626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.627 qpair failed and we were unable to recover it. 00:22:23.627 [2024-05-15 01:09:35.974784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.974991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.975019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.627 qpair failed and we were unable to recover it. 00:22:23.627 [2024-05-15 01:09:35.975266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.975530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.975578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.627 qpair failed and we were unable to recover it. 00:22:23.627 [2024-05-15 01:09:35.975744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.975938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.975966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.627 qpair failed and we were unable to recover it. 00:22:23.627 [2024-05-15 01:09:35.976178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.976416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.976446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.627 qpair failed and we were unable to recover it. 
00:22:23.627 [2024-05-15 01:09:35.976620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.976834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.976859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.627 qpair failed and we were unable to recover it. 00:22:23.627 [2024-05-15 01:09:35.977079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.977317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.977359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.627 qpair failed and we were unable to recover it. 00:22:23.627 [2024-05-15 01:09:35.977563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.977739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.977765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.627 qpair failed and we were unable to recover it. 00:22:23.627 [2024-05-15 01:09:35.977936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.978152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.978177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.627 qpair failed and we were unable to recover it. 00:22:23.627 [2024-05-15 01:09:35.978425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.978622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.978664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.627 qpair failed and we were unable to recover it. 00:22:23.627 [2024-05-15 01:09:35.978825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.979041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.979088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.627 qpair failed and we were unable to recover it. 00:22:23.627 [2024-05-15 01:09:35.979337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.979547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.979589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.627 qpair failed and we were unable to recover it. 
00:22:23.627 [2024-05-15 01:09:35.979810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.979987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.980017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.627 qpair failed and we were unable to recover it. 00:22:23.627 [2024-05-15 01:09:35.980241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.980489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.980517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.627 qpair failed and we were unable to recover it. 00:22:23.627 [2024-05-15 01:09:35.980719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.980934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.980959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.627 qpair failed and we were unable to recover it. 00:22:23.627 [2024-05-15 01:09:35.981117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.981331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.981375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.627 qpair failed and we were unable to recover it. 00:22:23.627 [2024-05-15 01:09:35.981620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.981830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.981855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.627 qpair failed and we were unable to recover it. 00:22:23.627 [2024-05-15 01:09:35.982069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.982312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.982360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.627 qpair failed and we were unable to recover it. 00:22:23.627 [2024-05-15 01:09:35.982578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.982779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.982804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.627 qpair failed and we were unable to recover it. 
00:22:23.627 [2024-05-15 01:09:35.983009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.983245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.983288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.627 qpair failed and we were unable to recover it. 00:22:23.627 [2024-05-15 01:09:35.983536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.983721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.983746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.627 qpair failed and we were unable to recover it. 00:22:23.627 [2024-05-15 01:09:35.983912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.984107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.984150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.627 qpair failed and we were unable to recover it. 00:22:23.627 [2024-05-15 01:09:35.984350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.984587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.627 [2024-05-15 01:09:35.984615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.627 qpair failed and we were unable to recover it. 00:22:23.628 [2024-05-15 01:09:35.984823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.628 [2024-05-15 01:09:35.985065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.628 [2024-05-15 01:09:35.985109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.628 qpair failed and we were unable to recover it. 00:22:23.628 [2024-05-15 01:09:35.985328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.628 [2024-05-15 01:09:35.985596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.628 [2024-05-15 01:09:35.985641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.628 qpair failed and we were unable to recover it. 00:22:23.628 [2024-05-15 01:09:35.985841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.904 [2024-05-15 01:09:35.986025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.904 [2024-05-15 01:09:35.986069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.904 qpair failed and we were unable to recover it. 
00:22:23.904 [2024-05-15 01:09:35.986271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.904 [2024-05-15 01:09:35.986498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.904 [2024-05-15 01:09:35.986542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.904 qpair failed and we were unable to recover it. 00:22:23.904 [2024-05-15 01:09:35.986733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.904 [2024-05-15 01:09:35.986922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.904 [2024-05-15 01:09:35.986961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.904 qpair failed and we were unable to recover it. 00:22:23.904 [2024-05-15 01:09:35.987174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.904 [2024-05-15 01:09:35.987404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.904 [2024-05-15 01:09:35.987446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.904 qpair failed and we were unable to recover it. 00:22:23.904 [2024-05-15 01:09:35.987656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.904 [2024-05-15 01:09:35.987831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.904 [2024-05-15 01:09:35.987856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.904 qpair failed and we were unable to recover it. 00:22:23.904 [2024-05-15 01:09:35.988064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.904 [2024-05-15 01:09:35.988307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.904 [2024-05-15 01:09:35.988349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.904 qpair failed and we were unable to recover it. 00:22:23.904 [2024-05-15 01:09:35.988571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.904 [2024-05-15 01:09:35.988752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.904 [2024-05-15 01:09:35.988781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.904 qpair failed and we were unable to recover it. 00:22:23.904 [2024-05-15 01:09:35.988949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.904 [2024-05-15 01:09:35.989137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.904 [2024-05-15 01:09:35.989180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.904 qpair failed and we were unable to recover it. 
00:22:23.904 [2024-05-15 01:09:35.989424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.989684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.989726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.905 qpair failed and we were unable to recover it. 00:22:23.905 [2024-05-15 01:09:35.989916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.990136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.990179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.905 qpair failed and we were unable to recover it. 00:22:23.905 [2024-05-15 01:09:35.990396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.990599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.990641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.905 qpair failed and we were unable to recover it. 00:22:23.905 [2024-05-15 01:09:35.990863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.991077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.991103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.905 qpair failed and we were unable to recover it. 00:22:23.905 [2024-05-15 01:09:35.991315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.991561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.991603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.905 qpair failed and we were unable to recover it. 00:22:23.905 [2024-05-15 01:09:35.991770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.991937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.991963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.905 qpair failed and we were unable to recover it. 00:22:23.905 [2024-05-15 01:09:35.992152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.992397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.992440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.905 qpair failed and we were unable to recover it. 
00:22:23.905 [2024-05-15 01:09:35.992634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.992842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.992868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.905 qpair failed and we were unable to recover it. 00:22:23.905 [2024-05-15 01:09:35.993084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.993315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.993359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.905 qpair failed and we were unable to recover it. 00:22:23.905 [2024-05-15 01:09:35.993544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.993756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.993781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.905 qpair failed and we were unable to recover it. 00:22:23.905 [2024-05-15 01:09:35.993989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.994198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.994241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.905 qpair failed and we were unable to recover it. 00:22:23.905 [2024-05-15 01:09:35.994457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.994657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.994698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.905 qpair failed and we were unable to recover it. 00:22:23.905 [2024-05-15 01:09:35.994920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.995113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.995156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.905 qpair failed and we were unable to recover it. 00:22:23.905 [2024-05-15 01:09:35.995395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.995624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.995667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.905 qpair failed and we were unable to recover it. 
00:22:23.905 [2024-05-15 01:09:35.995858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.996048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.996074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.905 qpair failed and we were unable to recover it. 00:22:23.905 [2024-05-15 01:09:35.996319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.996573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.996620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.905 qpair failed and we were unable to recover it. 00:22:23.905 [2024-05-15 01:09:35.996814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.997020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.997063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.905 qpair failed and we were unable to recover it. 00:22:23.905 [2024-05-15 01:09:35.997261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.997564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.997592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.905 qpair failed and we were unable to recover it. 00:22:23.905 [2024-05-15 01:09:35.997828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.998038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.998066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.905 qpair failed and we were unable to recover it. 00:22:23.905 [2024-05-15 01:09:35.998278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.998505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.998547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.905 qpair failed and we were unable to recover it. 00:22:23.905 [2024-05-15 01:09:35.998741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.998952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.998978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.905 qpair failed and we were unable to recover it. 
00:22:23.905 [2024-05-15 01:09:35.999162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.999378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.999423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.905 qpair failed and we were unable to recover it. 00:22:23.905 [2024-05-15 01:09:35.999667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.999880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:35.999905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.905 qpair failed and we were unable to recover it. 00:22:23.905 [2024-05-15 01:09:36.000104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:36.000317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:36.000359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.905 qpair failed and we were unable to recover it. 00:22:23.905 [2024-05-15 01:09:36.000600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:36.000808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:36.000835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.905 qpair failed and we were unable to recover it. 00:22:23.905 [2024-05-15 01:09:36.001073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:36.001342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:36.001389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.905 qpair failed and we were unable to recover it. 00:22:23.905 [2024-05-15 01:09:36.001614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:36.001796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:36.001821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.905 qpair failed and we were unable to recover it. 00:22:23.905 [2024-05-15 01:09:36.002063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:36.002324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:36.002371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.905 qpair failed and we were unable to recover it. 
00:22:23.905 [2024-05-15 01:09:36.002592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:36.002774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:36.002801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.905 qpair failed and we were unable to recover it. 00:22:23.905 [2024-05-15 01:09:36.003043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:36.003266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.905 [2024-05-15 01:09:36.003314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.906 qpair failed and we were unable to recover it. 00:22:23.906 [2024-05-15 01:09:36.003498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.003733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.003758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.906 qpair failed and we were unable to recover it. 00:22:23.906 [2024-05-15 01:09:36.003982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.004192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.004234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.906 qpair failed and we were unable to recover it. 00:22:23.906 [2024-05-15 01:09:36.004444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.004707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.004748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.906 qpair failed and we were unable to recover it. 00:22:23.906 [2024-05-15 01:09:36.004928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.005123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.005148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.906 qpair failed and we were unable to recover it. 00:22:23.906 [2024-05-15 01:09:36.005342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.005569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.005617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.906 qpair failed and we were unable to recover it. 
00:22:23.906 [2024-05-15 01:09:36.005847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.006042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.006069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.906 qpair failed and we were unable to recover it. 00:22:23.906 [2024-05-15 01:09:36.006316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.006609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.006651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.906 qpair failed and we were unable to recover it. 00:22:23.906 [2024-05-15 01:09:36.006872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.007054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.007080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.906 qpair failed and we were unable to recover it. 00:22:23.906 [2024-05-15 01:09:36.007267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.007531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.007578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.906 qpair failed and we were unable to recover it. 00:22:23.906 [2024-05-15 01:09:36.007796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.008007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.008051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.906 qpair failed and we were unable to recover it. 00:22:23.906 [2024-05-15 01:09:36.008264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.008506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.008553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.906 qpair failed and we were unable to recover it. 00:22:23.906 [2024-05-15 01:09:36.008742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.008906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.008936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.906 qpair failed and we were unable to recover it. 
00:22:23.906 [2024-05-15 01:09:36.009150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.009386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.009436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.906 qpair failed and we were unable to recover it. 00:22:23.906 [2024-05-15 01:09:36.009661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.009851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.009876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.906 qpair failed and we were unable to recover it. 00:22:23.906 [2024-05-15 01:09:36.010066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.010303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.010331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.906 qpair failed and we were unable to recover it. 00:22:23.906 [2024-05-15 01:09:36.010563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.010794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.010819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.906 qpair failed and we were unable to recover it. 00:22:23.906 [2024-05-15 01:09:36.010998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.011230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.011272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.906 qpair failed and we were unable to recover it. 00:22:23.906 [2024-05-15 01:09:36.011512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.011717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.011760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.906 qpair failed and we were unable to recover it. 00:22:23.906 [2024-05-15 01:09:36.011952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.012140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.012183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.906 qpair failed and we were unable to recover it. 
00:22:23.906 [2024-05-15 01:09:36.012378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.012619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.012661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.906 qpair failed and we were unable to recover it. 00:22:23.906 [2024-05-15 01:09:36.012822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.013033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.013075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.906 qpair failed and we were unable to recover it. 00:22:23.906 [2024-05-15 01:09:36.013331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.013599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.013646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.906 qpair failed and we were unable to recover it. 00:22:23.906 [2024-05-15 01:09:36.013870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.014106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.014135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.906 qpair failed and we were unable to recover it. 00:22:23.906 [2024-05-15 01:09:36.014393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.014684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.014727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.906 qpair failed and we were unable to recover it. 00:22:23.906 [2024-05-15 01:09:36.014949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.015109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.015136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.906 qpair failed and we were unable to recover it. 00:22:23.906 [2024-05-15 01:09:36.015364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.015574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.015617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.906 qpair failed and we were unable to recover it. 
00:22:23.906 [2024-05-15 01:09:36.015807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.015993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.016020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.906 qpair failed and we were unable to recover it. 00:22:23.906 [2024-05-15 01:09:36.016263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.016548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.016590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.906 qpair failed and we were unable to recover it. 00:22:23.906 [2024-05-15 01:09:36.016775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.016985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.906 [2024-05-15 01:09:36.017014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.906 qpair failed and we were unable to recover it. 00:22:23.906 [2024-05-15 01:09:36.017270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.017494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.017541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.907 qpair failed and we were unable to recover it. 00:22:23.907 [2024-05-15 01:09:36.017719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.017955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.017981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.907 qpair failed and we were unable to recover it. 00:22:23.907 [2024-05-15 01:09:36.018166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.018371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.018413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.907 qpair failed and we were unable to recover it. 00:22:23.907 [2024-05-15 01:09:36.018573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.018794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.018819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.907 qpair failed and we were unable to recover it. 
00:22:23.907 [2024-05-15 01:09:36.019025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.019255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.019302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.907 qpair failed and we were unable to recover it. 00:22:23.907 [2024-05-15 01:09:36.019530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.019753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.019798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.907 qpair failed and we were unable to recover it. 00:22:23.907 [2024-05-15 01:09:36.019993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.020223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.020266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.907 qpair failed and we were unable to recover it. 00:22:23.907 [2024-05-15 01:09:36.020558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.020751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.020777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.907 qpair failed and we were unable to recover it. 00:22:23.907 [2024-05-15 01:09:36.021001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.021178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.021221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.907 qpair failed and we were unable to recover it. 00:22:23.907 [2024-05-15 01:09:36.021434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.021650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.021693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.907 qpair failed and we were unable to recover it. 00:22:23.907 [2024-05-15 01:09:36.021902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.022152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.022199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.907 qpair failed and we were unable to recover it. 
00:22:23.907 [2024-05-15 01:09:36.022427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.022648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.022675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.907 qpair failed and we were unable to recover it. 00:22:23.907 [2024-05-15 01:09:36.022840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.023028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.023070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.907 qpair failed and we were unable to recover it. 00:22:23.907 [2024-05-15 01:09:36.023287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.023522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.023564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.907 qpair failed and we were unable to recover it. 00:22:23.907 [2024-05-15 01:09:36.023731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.023898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.023923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.907 qpair failed and we were unable to recover it. 00:22:23.907 [2024-05-15 01:09:36.024153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.024354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.024395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.907 qpair failed and we were unable to recover it. 00:22:23.907 [2024-05-15 01:09:36.024591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.024819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.024844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.907 qpair failed and we were unable to recover it. 00:22:23.907 [2024-05-15 01:09:36.025075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.025282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.025324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.907 qpair failed and we were unable to recover it. 
00:22:23.907 [2024-05-15 01:09:36.025515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.025766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.025807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.907 qpair failed and we were unable to recover it. 00:22:23.907 [2024-05-15 01:09:36.026050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.026274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.026321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.907 qpair failed and we were unable to recover it. 00:22:23.907 [2024-05-15 01:09:36.026520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.026747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.026794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.907 qpair failed and we were unable to recover it. 00:22:23.907 [2024-05-15 01:09:36.027086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.027341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.027387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.907 qpair failed and we were unable to recover it. 00:22:23.907 [2024-05-15 01:09:36.027580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.027855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.027879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.907 qpair failed and we were unable to recover it. 00:22:23.907 [2024-05-15 01:09:36.028084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.028314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.028357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.907 qpair failed and we were unable to recover it. 00:22:23.907 [2024-05-15 01:09:36.028572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.028741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.028765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.907 qpair failed and we were unable to recover it. 
00:22:23.907 [2024-05-15 01:09:36.028949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.029156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.029198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.907 qpair failed and we were unable to recover it. 00:22:23.907 [2024-05-15 01:09:36.029376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.029596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.029623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.907 qpair failed and we were unable to recover it. 00:22:23.907 [2024-05-15 01:09:36.029828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.030022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.030065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.907 qpair failed and we were unable to recover it. 00:22:23.907 [2024-05-15 01:09:36.030257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.030464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.030506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.907 qpair failed and we were unable to recover it. 00:22:23.907 [2024-05-15 01:09:36.030725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.907 [2024-05-15 01:09:36.030899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.030923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.908 qpair failed and we were unable to recover it. 00:22:23.908 [2024-05-15 01:09:36.031147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.031362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.031411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.908 qpair failed and we were unable to recover it. 00:22:23.908 [2024-05-15 01:09:36.031683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.031918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.031952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.908 qpair failed and we were unable to recover it. 
00:22:23.908 [2024-05-15 01:09:36.032142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.032375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.032417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.908 qpair failed and we were unable to recover it. 00:22:23.908 [2024-05-15 01:09:36.032631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.032842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.032868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.908 qpair failed and we were unable to recover it. 00:22:23.908 [2024-05-15 01:09:36.033082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.033335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.033378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.908 qpair failed and we were unable to recover it. 00:22:23.908 [2024-05-15 01:09:36.033694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.033905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.033934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.908 qpair failed and we were unable to recover it. 00:22:23.908 [2024-05-15 01:09:36.034155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.034367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.034409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.908 qpair failed and we were unable to recover it. 00:22:23.908 [2024-05-15 01:09:36.034627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.034843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.034867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.908 qpair failed and we were unable to recover it. 00:22:23.908 [2024-05-15 01:09:36.035077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.035340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.035391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.908 qpair failed and we were unable to recover it. 
00:22:23.908 [2024-05-15 01:09:36.035601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.035865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.035890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.908 qpair failed and we were unable to recover it. 00:22:23.908 [2024-05-15 01:09:36.036076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.036320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.036350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.908 qpair failed and we were unable to recover it. 00:22:23.908 [2024-05-15 01:09:36.036613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.036828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.036852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.908 qpair failed and we were unable to recover it. 00:22:23.908 [2024-05-15 01:09:36.037052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.037253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.037295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.908 qpair failed and we were unable to recover it. 00:22:23.908 [2024-05-15 01:09:36.037548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.037838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.037890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.908 qpair failed and we were unable to recover it. 00:22:23.908 [2024-05-15 01:09:36.038074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.038225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.038251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.908 qpair failed and we were unable to recover it. 00:22:23.908 [2024-05-15 01:09:36.038494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.038947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.039001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.908 qpair failed and we were unable to recover it. 
00:22:23.908 [2024-05-15 01:09:36.039202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.039429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.039471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.908 qpair failed and we were unable to recover it. 00:22:23.908 [2024-05-15 01:09:36.039729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.040034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.040060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.908 qpair failed and we were unable to recover it. 00:22:23.908 [2024-05-15 01:09:36.040319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.040723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.040784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.908 qpair failed and we were unable to recover it. 00:22:23.908 [2024-05-15 01:09:36.041032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.041255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.041300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.908 qpair failed and we were unable to recover it. 00:22:23.908 [2024-05-15 01:09:36.041503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.041815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.041870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.908 qpair failed and we were unable to recover it. 00:22:23.908 [2024-05-15 01:09:36.042116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.042353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.042381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.908 qpair failed and we were unable to recover it. 00:22:23.908 [2024-05-15 01:09:36.042616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.042848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.042872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.908 qpair failed and we were unable to recover it. 
00:22:23.908 [2024-05-15 01:09:36.043098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.043442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.043494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.908 qpair failed and we were unable to recover it. 00:22:23.908 [2024-05-15 01:09:36.043718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.043953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.043979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.908 qpair failed and we were unable to recover it. 00:22:23.908 [2024-05-15 01:09:36.044232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.044474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.044523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.908 qpair failed and we were unable to recover it. 00:22:23.908 [2024-05-15 01:09:36.044758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.045001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.045027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.908 qpair failed and we were unable to recover it. 00:22:23.908 [2024-05-15 01:09:36.045227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.045429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.045473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.908 qpair failed and we were unable to recover it. 00:22:23.908 [2024-05-15 01:09:36.045697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.045940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.908 [2024-05-15 01:09:36.045966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.908 qpair failed and we were unable to recover it. 00:22:23.908 [2024-05-15 01:09:36.046160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.046404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.046446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.909 qpair failed and we were unable to recover it. 
00:22:23.909 [2024-05-15 01:09:36.046670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.046898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.046947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.909 qpair failed and we were unable to recover it. 00:22:23.909 [2024-05-15 01:09:36.047236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.047492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.047535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.909 qpair failed and we were unable to recover it. 00:22:23.909 [2024-05-15 01:09:36.047694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.047885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.047911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.909 qpair failed and we were unable to recover it. 00:22:23.909 [2024-05-15 01:09:36.048103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.048318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.048346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.909 qpair failed and we were unable to recover it. 00:22:23.909 [2024-05-15 01:09:36.048586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.048836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.048861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.909 qpair failed and we were unable to recover it. 00:22:23.909 [2024-05-15 01:09:36.049093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.049321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.049349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.909 qpair failed and we were unable to recover it. 00:22:23.909 [2024-05-15 01:09:36.049547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.049785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.049809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.909 qpair failed and we were unable to recover it. 
00:22:23.909 [2024-05-15 01:09:36.050002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.050188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.050230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.909 qpair failed and we were unable to recover it. 00:22:23.909 [2024-05-15 01:09:36.050479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.050728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.050770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.909 qpair failed and we were unable to recover it. 00:22:23.909 [2024-05-15 01:09:36.050971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.051197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.051239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.909 qpair failed and we were unable to recover it. 00:22:23.909 [2024-05-15 01:09:36.051497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.051737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.051765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.909 qpair failed and we were unable to recover it. 00:22:23.909 [2024-05-15 01:09:36.052018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.052276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.052327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.909 qpair failed and we were unable to recover it. 00:22:23.909 [2024-05-15 01:09:36.052588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.052810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.052834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.909 qpair failed and we were unable to recover it. 00:22:23.909 [2024-05-15 01:09:36.053066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.053260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.053303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.909 qpair failed and we were unable to recover it. 
00:22:23.909 [2024-05-15 01:09:36.053532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.053770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.053795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.909 qpair failed and we were unable to recover it. 00:22:23.909 [2024-05-15 01:09:36.054044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.054254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.054295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.909 qpair failed and we were unable to recover it. 00:22:23.909 [2024-05-15 01:09:36.054507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.054743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.054768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.909 qpair failed and we were unable to recover it. 00:22:23.909 [2024-05-15 01:09:36.054948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.055163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.055206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.909 qpair failed and we were unable to recover it. 00:22:23.909 [2024-05-15 01:09:36.055399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.055548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.055574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.909 qpair failed and we were unable to recover it. 00:22:23.909 [2024-05-15 01:09:36.055793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.056012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.056041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.909 qpair failed and we were unable to recover it. 00:22:23.909 [2024-05-15 01:09:36.056238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.056445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.056487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.909 qpair failed and we were unable to recover it. 
00:22:23.909 [2024-05-15 01:09:36.056697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.056870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.056895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.909 qpair failed and we were unable to recover it. 00:22:23.909 [2024-05-15 01:09:36.057098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.909 [2024-05-15 01:09:36.057345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.057387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.910 qpair failed and we were unable to recover it. 00:22:23.910 [2024-05-15 01:09:36.057570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.057788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.057812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.910 qpair failed and we were unable to recover it. 00:22:23.910 [2024-05-15 01:09:36.058060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.058303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.058345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.910 qpair failed and we were unable to recover it. 00:22:23.910 [2024-05-15 01:09:36.058569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.058800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.058841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.910 qpair failed and we were unable to recover it. 00:22:23.910 [2024-05-15 01:09:36.059048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.059278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.059319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.910 qpair failed and we were unable to recover it. 00:22:23.910 [2024-05-15 01:09:36.059572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.059768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.059792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.910 qpair failed and we were unable to recover it. 
00:22:23.910 [2024-05-15 01:09:36.060012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.060238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.060280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.910 qpair failed and we were unable to recover it. 00:22:23.910 [2024-05-15 01:09:36.060560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.060811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.060836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.910 qpair failed and we were unable to recover it. 00:22:23.910 [2024-05-15 01:09:36.061006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.061211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.061252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.910 qpair failed and we were unable to recover it. 00:22:23.910 [2024-05-15 01:09:36.061490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.061707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.061732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.910 qpair failed and we were unable to recover it. 00:22:23.910 [2024-05-15 01:09:36.061995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.062231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.062259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.910 qpair failed and we were unable to recover it. 00:22:23.910 [2024-05-15 01:09:36.062463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.062671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.062713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.910 qpair failed and we were unable to recover it. 00:22:23.910 [2024-05-15 01:09:36.062898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.063092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.063118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.910 qpair failed and we were unable to recover it. 
00:22:23.910 [2024-05-15 01:09:36.063346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.063725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.063776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.910 qpair failed and we were unable to recover it. 00:22:23.910 [2024-05-15 01:09:36.063979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.064218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.064258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.910 qpair failed and we were unable to recover it. 00:22:23.910 [2024-05-15 01:09:36.064446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.064656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.064698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.910 qpair failed and we were unable to recover it. 00:22:23.910 [2024-05-15 01:09:36.064876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.065113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.065139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.910 qpair failed and we were unable to recover it. 00:22:23.910 [2024-05-15 01:09:36.065332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.065669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.065711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.910 qpair failed and we were unable to recover it. 00:22:23.910 [2024-05-15 01:09:36.065911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.066086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.066112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.910 qpair failed and we were unable to recover it. 00:22:23.910 [2024-05-15 01:09:36.066303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.066532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.066574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.910 qpair failed and we were unable to recover it. 
00:22:23.910 [2024-05-15 01:09:36.066764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.066953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.066980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.910 qpair failed and we were unable to recover it. 00:22:23.910 [2024-05-15 01:09:36.067166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.067357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.067384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.910 qpair failed and we were unable to recover it. 00:22:23.910 [2024-05-15 01:09:36.067596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.067794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.067819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.910 qpair failed and we were unable to recover it. 00:22:23.910 [2024-05-15 01:09:36.068001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.068301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.068343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.910 qpair failed and we were unable to recover it. 00:22:23.910 [2024-05-15 01:09:36.068599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.068796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.068822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.910 qpair failed and we were unable to recover it. 00:22:23.910 [2024-05-15 01:09:36.069052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.069249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.069292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.910 qpair failed and we were unable to recover it. 00:22:23.910 [2024-05-15 01:09:36.069480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.069713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.069738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.910 qpair failed and we were unable to recover it. 
00:22:23.910 [2024-05-15 01:09:36.069886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.070062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.070104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.910 qpair failed and we were unable to recover it. 00:22:23.910 [2024-05-15 01:09:36.070351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.070639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.070682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.910 qpair failed and we were unable to recover it. 00:22:23.910 [2024-05-15 01:09:36.070913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.910 [2024-05-15 01:09:36.071136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.071162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.911 qpair failed and we were unable to recover it. 00:22:23.911 [2024-05-15 01:09:36.071344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.071630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.071673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.911 qpair failed and we were unable to recover it. 00:22:23.911 [2024-05-15 01:09:36.071919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.072117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.072143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.911 qpair failed and we were unable to recover it. 00:22:23.911 [2024-05-15 01:09:36.072363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.072625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.072666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.911 qpair failed and we were unable to recover it. 00:22:23.911 [2024-05-15 01:09:36.072865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.073058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.073085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.911 qpair failed and we were unable to recover it. 
00:22:23.911 [2024-05-15 01:09:36.073274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.073534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.073576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.911 qpair failed and we were unable to recover it. 00:22:23.911 [2024-05-15 01:09:36.073766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.073962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.073988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.911 qpair failed and we were unable to recover it. 00:22:23.911 [2024-05-15 01:09:36.074188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.074443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.074485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.911 qpair failed and we were unable to recover it. 00:22:23.911 [2024-05-15 01:09:36.074729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.074965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.074991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.911 qpair failed and we were unable to recover it. 00:22:23.911 [2024-05-15 01:09:36.075177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.075413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.075441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.911 qpair failed and we were unable to recover it. 00:22:23.911 [2024-05-15 01:09:36.075683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.075876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.075900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.911 qpair failed and we were unable to recover it. 00:22:23.911 [2024-05-15 01:09:36.076095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.076269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.076295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.911 qpair failed and we were unable to recover it. 
00:22:23.911 [2024-05-15 01:09:36.076554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.076777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.076802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.911 qpair failed and we were unable to recover it. 00:22:23.911 [2024-05-15 01:09:36.076956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.077203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.077244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.911 qpair failed and we were unable to recover it. 00:22:23.911 [2024-05-15 01:09:36.077498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.077807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.077850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.911 qpair failed and we were unable to recover it. 00:22:23.911 [2024-05-15 01:09:36.078087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.078358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.078416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.911 qpair failed and we were unable to recover it. 00:22:23.911 [2024-05-15 01:09:36.078671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.078868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.078893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.911 qpair failed and we were unable to recover it. 00:22:23.911 [2024-05-15 01:09:36.079115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.079376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.079419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.911 qpair failed and we were unable to recover it. 00:22:23.911 [2024-05-15 01:09:36.079610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.079836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.079861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.911 qpair failed and we were unable to recover it. 
00:22:23.911 [2024-05-15 01:09:36.080084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.080311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.080354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.911 qpair failed and we were unable to recover it. 00:22:23.911 [2024-05-15 01:09:36.080535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.080741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.080767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.911 qpair failed and we were unable to recover it. 00:22:23.911 [2024-05-15 01:09:36.081001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.081213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.081255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.911 qpair failed and we were unable to recover it. 00:22:23.911 [2024-05-15 01:09:36.081456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.081667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.081708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.911 qpair failed and we were unable to recover it. 00:22:23.911 [2024-05-15 01:09:36.081907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.082090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.082117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.911 qpair failed and we were unable to recover it. 00:22:23.911 [2024-05-15 01:09:36.082322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.082593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.082635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.911 qpair failed and we were unable to recover it. 00:22:23.911 [2024-05-15 01:09:36.082840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.083058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.083102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.911 qpair failed and we were unable to recover it. 
00:22:23.911 [2024-05-15 01:09:36.083362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.083563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.083591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.911 qpair failed and we were unable to recover it. 00:22:23.911 [2024-05-15 01:09:36.083915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.084151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.084177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.911 qpair failed and we were unable to recover it. 00:22:23.911 [2024-05-15 01:09:36.084422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.084607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.084633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.911 qpair failed and we were unable to recover it. 00:22:23.911 [2024-05-15 01:09:36.084821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.085066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.085110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.911 qpair failed and we were unable to recover it. 00:22:23.911 [2024-05-15 01:09:36.085288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.911 [2024-05-15 01:09:36.085496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.085537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.912 qpair failed and we were unable to recover it. 00:22:23.912 [2024-05-15 01:09:36.085721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.085906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.085936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.912 qpair failed and we were unable to recover it. 00:22:23.912 [2024-05-15 01:09:36.086145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.086387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.086417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.912 qpair failed and we were unable to recover it. 
00:22:23.912 [2024-05-15 01:09:36.086621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.086862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.086887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.912 qpair failed and we were unable to recover it. 00:22:23.912 [2024-05-15 01:09:36.087068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.087254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.087294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.912 qpair failed and we were unable to recover it. 00:22:23.912 [2024-05-15 01:09:36.087513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.087717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.087758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.912 qpair failed and we were unable to recover it. 00:22:23.912 [2024-05-15 01:09:36.087940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.088139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.088164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.912 qpair failed and we were unable to recover it. 00:22:23.912 [2024-05-15 01:09:36.088373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.088575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.088619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.912 qpair failed and we were unable to recover it. 00:22:23.912 [2024-05-15 01:09:36.088813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.089035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.089079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.912 qpair failed and we were unable to recover it. 00:22:23.912 [2024-05-15 01:09:36.089340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.089640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.089692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.912 qpair failed and we were unable to recover it. 
00:22:23.912 [2024-05-15 01:09:36.089892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.090103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.090129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.912 qpair failed and we were unable to recover it. 00:22:23.912 [2024-05-15 01:09:36.090340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.090607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.090635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.912 qpair failed and we were unable to recover it. 00:22:23.912 [2024-05-15 01:09:36.090844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.091010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.091038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.912 qpair failed and we were unable to recover it. 00:22:23.912 [2024-05-15 01:09:36.091254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.091677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.091726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.912 qpair failed and we were unable to recover it. 00:22:23.912 [2024-05-15 01:09:36.091964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.092124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.092148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.912 qpair failed and we were unable to recover it. 00:22:23.912 [2024-05-15 01:09:36.092410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.092640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.092682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.912 qpair failed and we were unable to recover it. 00:22:23.912 [2024-05-15 01:09:36.092879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.093101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.093145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.912 qpair failed and we were unable to recover it. 
00:22:23.912 [2024-05-15 01:09:36.093369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.093599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.093641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.912 qpair failed and we were unable to recover it. 00:22:23.912 [2024-05-15 01:09:36.093836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.094034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.094060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.912 qpair failed and we were unable to recover it. 00:22:23.912 [2024-05-15 01:09:36.094370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.094624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.094666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.912 qpair failed and we were unable to recover it. 00:22:23.912 [2024-05-15 01:09:36.094887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.095053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.095079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.912 qpair failed and we were unable to recover it. 00:22:23.912 [2024-05-15 01:09:36.095267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.095496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.095538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.912 qpair failed and we were unable to recover it. 00:22:23.912 [2024-05-15 01:09:36.095737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.095972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.095997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.912 qpair failed and we were unable to recover it. 00:22:23.912 [2024-05-15 01:09:36.096237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.096486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.096529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.912 qpair failed and we were unable to recover it. 
00:22:23.912 [2024-05-15 01:09:36.096732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.096945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.096971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.912 qpair failed and we were unable to recover it. 00:22:23.912 [2024-05-15 01:09:36.097192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.097518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.097575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.912 qpair failed and we were unable to recover it. 00:22:23.912 [2024-05-15 01:09:36.097784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.098007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.098033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.912 qpair failed and we were unable to recover it. 00:22:23.912 [2024-05-15 01:09:36.098243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.098469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.098511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.912 qpair failed and we were unable to recover it. 00:22:23.912 [2024-05-15 01:09:36.098741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.098940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.098983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.912 qpair failed and we were unable to recover it. 00:22:23.912 [2024-05-15 01:09:36.099173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.912 [2024-05-15 01:09:36.099415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.099457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.913 qpair failed and we were unable to recover it. 00:22:23.913 [2024-05-15 01:09:36.099650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.099843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.099872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.913 qpair failed and we were unable to recover it. 
00:22:23.913 [2024-05-15 01:09:36.100073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.100283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.100325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.913 qpair failed and we were unable to recover it. 00:22:23.913 [2024-05-15 01:09:36.100521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.100755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.100780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.913 qpair failed and we were unable to recover it. 00:22:23.913 [2024-05-15 01:09:36.100993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.101189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.101214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.913 qpair failed and we were unable to recover it. 00:22:23.913 [2024-05-15 01:09:36.101413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.101706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.101749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.913 qpair failed and we were unable to recover it. 00:22:23.913 [2024-05-15 01:09:36.101943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.102159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.102200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.913 qpair failed and we were unable to recover it. 00:22:23.913 [2024-05-15 01:09:36.102408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.102629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.102657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.913 qpair failed and we were unable to recover it. 00:22:23.913 [2024-05-15 01:09:36.102940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.103147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.103171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.913 qpair failed and we were unable to recover it. 
00:22:23.913 [2024-05-15 01:09:36.103388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.103613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.103658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.913 qpair failed and we were unable to recover it. 00:22:23.913 [2024-05-15 01:09:36.103913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.104130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.104175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.913 qpair failed and we were unable to recover it. 00:22:23.913 [2024-05-15 01:09:36.104387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.104624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.104672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.913 qpair failed and we were unable to recover it. 00:22:23.913 [2024-05-15 01:09:36.104872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.105045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.105072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.913 qpair failed and we were unable to recover it. 00:22:23.913 [2024-05-15 01:09:36.105294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.105647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.105693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.913 qpair failed and we were unable to recover it. 00:22:23.913 [2024-05-15 01:09:36.105918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.106125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.106150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.913 qpair failed and we were unable to recover it. 00:22:23.913 [2024-05-15 01:09:36.106446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.106709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.106751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.913 qpair failed and we were unable to recover it. 
00:22:23.913 [2024-05-15 01:09:36.106962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.107232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.107275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.913 qpair failed and we were unable to recover it. 00:22:23.913 [2024-05-15 01:09:36.107531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.107883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.107938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.913 qpair failed and we were unable to recover it. 00:22:23.913 [2024-05-15 01:09:36.108148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.108384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.108425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.913 qpair failed and we were unable to recover it. 00:22:23.913 [2024-05-15 01:09:36.108628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.108824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.108848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.913 qpair failed and we were unable to recover it. 00:22:23.913 [2024-05-15 01:09:36.109046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.109256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.109297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.913 qpair failed and we were unable to recover it. 00:22:23.913 [2024-05-15 01:09:36.109476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.109810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.109873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.913 qpair failed and we were unable to recover it. 00:22:23.913 [2024-05-15 01:09:36.110098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.110341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.110383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.913 qpair failed and we were unable to recover it. 
00:22:23.913 [2024-05-15 01:09:36.110599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.110831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.110856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.913 qpair failed and we were unable to recover it. 00:22:23.913 [2024-05-15 01:09:36.111042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.111300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.913 [2024-05-15 01:09:36.111342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.913 qpair failed and we were unable to recover it. 00:22:23.914 [2024-05-15 01:09:36.111574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.111789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.111815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.914 qpair failed and we were unable to recover it. 00:22:23.914 [2024-05-15 01:09:36.111989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.112225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.112253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.914 qpair failed and we were unable to recover it. 00:22:23.914 [2024-05-15 01:09:36.112521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.112911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.112988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.914 qpair failed and we were unable to recover it. 00:22:23.914 [2024-05-15 01:09:36.113207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.113439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.113481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.914 qpair failed and we were unable to recover it. 00:22:23.914 [2024-05-15 01:09:36.113732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.113912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.113945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.914 qpair failed and we were unable to recover it. 
00:22:23.914 [2024-05-15 01:09:36.114149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.114392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.114434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.914 qpair failed and we were unable to recover it. 00:22:23.914 [2024-05-15 01:09:36.114616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.114850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.114879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.914 qpair failed and we were unable to recover it. 00:22:23.914 [2024-05-15 01:09:36.115105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.115464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.115514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.914 qpair failed and we were unable to recover it. 00:22:23.914 [2024-05-15 01:09:36.115763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.115943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.115969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.914 qpair failed and we were unable to recover it. 00:22:23.914 [2024-05-15 01:09:36.116135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.116372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.116399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.914 qpair failed and we were unable to recover it. 00:22:23.914 [2024-05-15 01:09:36.116656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.116869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.116895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.914 qpair failed and we were unable to recover it. 00:22:23.914 [2024-05-15 01:09:36.117124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.117307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.117349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.914 qpair failed and we were unable to recover it. 
00:22:23.914 [2024-05-15 01:09:36.117568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.117774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.117800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.914 qpair failed and we were unable to recover it. 00:22:23.914 [2024-05-15 01:09:36.118007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.118244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.118271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.914 qpair failed and we were unable to recover it. 00:22:23.914 [2024-05-15 01:09:36.118515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.118721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.118747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.914 qpair failed and we were unable to recover it. 00:22:23.914 [2024-05-15 01:09:36.118947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.119144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.119169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.914 qpair failed and we were unable to recover it. 00:22:23.914 [2024-05-15 01:09:36.119414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.119616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.119658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.914 qpair failed and we were unable to recover it. 00:22:23.914 [2024-05-15 01:09:36.119857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.120053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.120079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.914 qpair failed and we were unable to recover it. 00:22:23.914 [2024-05-15 01:09:36.120287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.120495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.120539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.914 qpair failed and we were unable to recover it. 
00:22:23.914 [2024-05-15 01:09:36.120729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.120924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.120955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.914 qpair failed and we were unable to recover it. 00:22:23.914 [2024-05-15 01:09:36.121174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.121392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.121434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.914 qpair failed and we were unable to recover it. 00:22:23.914 [2024-05-15 01:09:36.121644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.121825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.121852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.914 qpair failed and we were unable to recover it. 00:22:23.914 [2024-05-15 01:09:36.122075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.122264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.122306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.914 qpair failed and we were unable to recover it. 00:22:23.914 [2024-05-15 01:09:36.122497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.122724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.122767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.914 qpair failed and we were unable to recover it. 00:22:23.914 [2024-05-15 01:09:36.122992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.123205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.123247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.914 qpair failed and we were unable to recover it. 00:22:23.914 [2024-05-15 01:09:36.123452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.123660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.123701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.914 qpair failed and we were unable to recover it. 
00:22:23.914 [2024-05-15 01:09:36.123866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.124061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.124086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.914 qpair failed and we were unable to recover it. 00:22:23.914 [2024-05-15 01:09:36.124312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.124536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.124580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.914 qpair failed and we were unable to recover it. 00:22:23.914 [2024-05-15 01:09:36.124809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.125049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.125092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.914 qpair failed and we were unable to recover it. 00:22:23.914 [2024-05-15 01:09:36.125287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.914 [2024-05-15 01:09:36.125525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.125568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.915 qpair failed and we were unable to recover it. 00:22:23.915 [2024-05-15 01:09:36.125757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.125944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.125970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.915 qpair failed and we were unable to recover it. 00:22:23.915 [2024-05-15 01:09:36.126179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.126446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.126489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.915 qpair failed and we were unable to recover it. 00:22:23.915 [2024-05-15 01:09:36.126681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.126901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.126925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.915 qpair failed and we were unable to recover it. 
00:22:23.915 [2024-05-15 01:09:36.127127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.127390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.127433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.915 qpair failed and we were unable to recover it. 00:22:23.915 [2024-05-15 01:09:36.127687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.127869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.127894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.915 qpair failed and we were unable to recover it. 00:22:23.915 [2024-05-15 01:09:36.128122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.128318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.128360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.915 qpair failed and we were unable to recover it. 00:22:23.915 [2024-05-15 01:09:36.128579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.128793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.128818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.915 qpair failed and we were unable to recover it. 00:22:23.915 [2024-05-15 01:09:36.129033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.129270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.129312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.915 qpair failed and we were unable to recover it. 00:22:23.915 [2024-05-15 01:09:36.129534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.129768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.129811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.915 qpair failed and we were unable to recover it. 00:22:23.915 [2024-05-15 01:09:36.130035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.130318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.130360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.915 qpair failed and we were unable to recover it. 
00:22:23.915 [2024-05-15 01:09:36.130578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.130759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.130784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.915 qpair failed and we were unable to recover it. 00:22:23.915 [2024-05-15 01:09:36.130995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.131232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.131275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.915 qpair failed and we were unable to recover it. 00:22:23.915 [2024-05-15 01:09:36.131526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.131756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.131781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.915 qpair failed and we were unable to recover it. 00:22:23.915 [2024-05-15 01:09:36.132007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.132199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.132241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.915 qpair failed and we were unable to recover it. 00:22:23.915 [2024-05-15 01:09:36.132455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.132807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.132858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.915 qpair failed and we were unable to recover it. 00:22:23.915 [2024-05-15 01:09:36.133076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.133314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.133357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.915 qpair failed and we were unable to recover it. 00:22:23.915 [2024-05-15 01:09:36.133587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.133796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.133821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.915 qpair failed and we were unable to recover it. 
00:22:23.915 [2024-05-15 01:09:36.134061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.134267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.134310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.915 qpair failed and we were unable to recover it. 00:22:23.915 [2024-05-15 01:09:36.134505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.134716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.134742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.915 qpair failed and we were unable to recover it. 00:22:23.915 [2024-05-15 01:09:36.134963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.135225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.135267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.915 qpair failed and we were unable to recover it. 00:22:23.915 [2024-05-15 01:09:36.135484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.135763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.135806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.915 qpair failed and we were unable to recover it. 00:22:23.915 [2024-05-15 01:09:36.136058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.136472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.136523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.915 qpair failed and we were unable to recover it. 00:22:23.915 [2024-05-15 01:09:36.136739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.136944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.136969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.915 qpair failed and we were unable to recover it. 00:22:23.915 [2024-05-15 01:09:36.137148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.137406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.137446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.915 qpair failed and we were unable to recover it. 
00:22:23.915 [2024-05-15 01:09:36.137659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.137860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.137885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.915 qpair failed and we were unable to recover it. 00:22:23.915 [2024-05-15 01:09:36.138091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.138272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.138314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.915 qpair failed and we were unable to recover it. 00:22:23.915 [2024-05-15 01:09:36.138564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.138869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.138920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.915 qpair failed and we were unable to recover it. 00:22:23.915 [2024-05-15 01:09:36.139124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.139331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.139374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.915 qpair failed and we were unable to recover it. 00:22:23.915 [2024-05-15 01:09:36.139606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.139791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.915 [2024-05-15 01:09:36.139817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.915 qpair failed and we were unable to recover it. 00:22:23.915 [2024-05-15 01:09:36.140065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.140389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.140438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.916 qpair failed and we were unable to recover it. 00:22:23.916 [2024-05-15 01:09:36.140647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.140869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.140895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.916 qpair failed and we were unable to recover it. 
00:22:23.916 [2024-05-15 01:09:36.141133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.141359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.141402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.916 qpair failed and we were unable to recover it. 00:22:23.916 [2024-05-15 01:09:36.141594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.141825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.141849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.916 qpair failed and we were unable to recover it. 00:22:23.916 [2024-05-15 01:09:36.142072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.142305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.142347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.916 qpair failed and we were unable to recover it. 00:22:23.916 [2024-05-15 01:09:36.142530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.142770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.142796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.916 qpair failed and we were unable to recover it. 00:22:23.916 [2024-05-15 01:09:36.143028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.143278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.143307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.916 qpair failed and we were unable to recover it. 00:22:23.916 [2024-05-15 01:09:36.143541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.143763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.143787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.916 qpair failed and we were unable to recover it. 00:22:23.916 [2024-05-15 01:09:36.144040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.144360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.144389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.916 qpair failed and we were unable to recover it. 
00:22:23.916 [2024-05-15 01:09:36.144590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.144771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.144797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.916 qpair failed and we were unable to recover it. 00:22:23.916 [2024-05-15 01:09:36.145041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.145306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.145349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.916 qpair failed and we were unable to recover it. 00:22:23.916 [2024-05-15 01:09:36.145606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.145807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.145832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.916 qpair failed and we were unable to recover it. 00:22:23.916 [2024-05-15 01:09:36.146080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.146362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.146411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.916 qpair failed and we were unable to recover it. 00:22:23.916 [2024-05-15 01:09:36.146631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.146798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.146822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.916 qpair failed and we were unable to recover it. 00:22:23.916 [2024-05-15 01:09:36.147071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.147299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.147346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.916 qpair failed and we were unable to recover it. 00:22:23.916 [2024-05-15 01:09:36.147553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.147778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.147803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.916 qpair failed and we were unable to recover it. 
00:22:23.916 [2024-05-15 01:09:36.148059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.148340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.148391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.916 qpair failed and we were unable to recover it. 00:22:23.916 [2024-05-15 01:09:36.148621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.148821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.148846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.916 qpair failed and we were unable to recover it. 00:22:23.916 [2024-05-15 01:09:36.149063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.149268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.149311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.916 qpair failed and we were unable to recover it. 00:22:23.916 [2024-05-15 01:09:36.149536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.149750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.149794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.916 qpair failed and we were unable to recover it. 00:22:23.916 [2024-05-15 01:09:36.149976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.150162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.150205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.916 qpair failed and we were unable to recover it. 00:22:23.916 [2024-05-15 01:09:36.150456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.150752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.150795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.916 qpair failed and we were unable to recover it. 00:22:23.916 [2024-05-15 01:09:36.151008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.151245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.151273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.916 qpair failed and we were unable to recover it. 
00:22:23.916 [2024-05-15 01:09:36.151510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.151746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.151788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.916 qpair failed and we were unable to recover it. 00:22:23.916 [2024-05-15 01:09:36.151993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.152205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.152233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.916 qpair failed and we were unable to recover it. 00:22:23.916 [2024-05-15 01:09:36.152443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.152666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.152692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.916 qpair failed and we were unable to recover it. 00:22:23.916 [2024-05-15 01:09:36.152882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.153104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.153147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.916 qpair failed and we were unable to recover it. 00:22:23.916 [2024-05-15 01:09:36.153335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.153570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.153614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.916 qpair failed and we were unable to recover it. 00:22:23.916 [2024-05-15 01:09:36.153806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.154025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.154074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.916 qpair failed and we were unable to recover it. 00:22:23.916 [2024-05-15 01:09:36.154291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.154507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.916 [2024-05-15 01:09:36.154550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.916 qpair failed and we were unable to recover it. 
00:22:23.917 [2024-05-15 01:09:36.154764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.154982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.155018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.917 qpair failed and we were unable to recover it. 00:22:23.917 [2024-05-15 01:09:36.155327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.155590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.155634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.917 qpair failed and we were unable to recover it. 00:22:23.917 [2024-05-15 01:09:36.155829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.156023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.156051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.917 qpair failed and we were unable to recover it. 00:22:23.917 [2024-05-15 01:09:36.156246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.156480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.156523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.917 qpair failed and we were unable to recover it. 00:22:23.917 [2024-05-15 01:09:36.156742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.156975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.157001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.917 qpair failed and we were unable to recover it. 00:22:23.917 [2024-05-15 01:09:36.157216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.157458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.157500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.917 qpair failed and we were unable to recover it. 00:22:23.917 [2024-05-15 01:09:36.157693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.157883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.157907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.917 qpair failed and we were unable to recover it. 
00:22:23.917 [2024-05-15 01:09:36.158113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.158365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.158393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.917 qpair failed and we were unable to recover it. 00:22:23.917 [2024-05-15 01:09:36.158599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.158813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.158839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.917 qpair failed and we were unable to recover it. 00:22:23.917 [2024-05-15 01:09:36.159004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.159226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.159269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.917 qpair failed and we were unable to recover it. 00:22:23.917 [2024-05-15 01:09:36.159489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.159693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.159735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.917 qpair failed and we were unable to recover it. 00:22:23.917 [2024-05-15 01:09:36.159951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.160170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.160219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.917 qpair failed and we were unable to recover it. 00:22:23.917 [2024-05-15 01:09:36.160409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.160644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.160687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.917 qpair failed and we were unable to recover it. 00:22:23.917 [2024-05-15 01:09:36.160851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.161070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.161114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.917 qpair failed and we were unable to recover it. 
00:22:23.917 [2024-05-15 01:09:36.161333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.161600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.161644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.917 qpair failed and we were unable to recover it. 00:22:23.917 [2024-05-15 01:09:36.161833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.162077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.162120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.917 qpair failed and we were unable to recover it. 00:22:23.917 [2024-05-15 01:09:36.162309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.162526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.162573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.917 qpair failed and we were unable to recover it. 00:22:23.917 [2024-05-15 01:09:36.162770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.162996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.163039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.917 qpair failed and we were unable to recover it. 00:22:23.917 [2024-05-15 01:09:36.163233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.163466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.163509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.917 qpair failed and we were unable to recover it. 00:22:23.917 [2024-05-15 01:09:36.163725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.163912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.163942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.917 qpair failed and we were unable to recover it. 00:22:23.917 [2024-05-15 01:09:36.164169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.164376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.164419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.917 qpair failed and we were unable to recover it. 
00:22:23.917 [2024-05-15 01:09:36.164629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.164812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.164836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.917 qpair failed and we were unable to recover it. 00:22:23.917 [2024-05-15 01:09:36.165052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.165288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.165330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.917 qpair failed and we were unable to recover it. 00:22:23.917 [2024-05-15 01:09:36.165547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.165742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.165783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.917 qpair failed and we were unable to recover it. 00:22:23.917 [2024-05-15 01:09:36.166038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.166251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.166295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.917 qpair failed and we were unable to recover it. 00:22:23.917 [2024-05-15 01:09:36.166485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.166693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.166718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.917 qpair failed and we were unable to recover it. 00:22:23.917 [2024-05-15 01:09:36.166926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.167149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.167192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.917 qpair failed and we were unable to recover it. 00:22:23.917 [2024-05-15 01:09:36.167392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.167618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.167646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.917 qpair failed and we were unable to recover it. 
00:22:23.917 [2024-05-15 01:09:36.167855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.168045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.168089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.917 qpair failed and we were unable to recover it. 00:22:23.917 [2024-05-15 01:09:36.168286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.168490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.917 [2024-05-15 01:09:36.168517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.918 qpair failed and we were unable to recover it. 00:22:23.918 [2024-05-15 01:09:36.168721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.168899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.168924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.918 qpair failed and we were unable to recover it. 00:22:23.918 [2024-05-15 01:09:36.169128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.169332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.169375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.918 qpair failed and we were unable to recover it. 00:22:23.918 [2024-05-15 01:09:36.169601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.169783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.169808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.918 qpair failed and we were unable to recover it. 00:22:23.918 [2024-05-15 01:09:36.170021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.170283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.170326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.918 qpair failed and we were unable to recover it. 00:22:23.918 [2024-05-15 01:09:36.170527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.170943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.170989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.918 qpair failed and we were unable to recover it. 
00:22:23.918 [2024-05-15 01:09:36.171240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.171559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.171619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.918 qpair failed and we were unable to recover it. 00:22:23.918 [2024-05-15 01:09:36.171841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.172026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.172052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.918 qpair failed and we were unable to recover it. 00:22:23.918 [2024-05-15 01:09:36.172292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.172505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.172550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.918 qpair failed and we were unable to recover it. 00:22:23.918 [2024-05-15 01:09:36.172746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.172937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.172968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.918 qpair failed and we were unable to recover it. 00:22:23.918 [2024-05-15 01:09:36.173157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.173377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.173421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.918 qpair failed and we were unable to recover it. 00:22:23.918 [2024-05-15 01:09:36.173613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.173800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.173825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.918 qpair failed and we were unable to recover it. 00:22:23.918 [2024-05-15 01:09:36.174044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.174287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.174331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.918 qpair failed and we were unable to recover it. 
00:22:23.918 [2024-05-15 01:09:36.174530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.174741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.174766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.918 qpair failed and we were unable to recover it. 00:22:23.918 [2024-05-15 01:09:36.174940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.175107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.175131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.918 qpair failed and we were unable to recover it. 00:22:23.918 [2024-05-15 01:09:36.175377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.175654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.175697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.918 qpair failed and we were unable to recover it. 00:22:23.918 [2024-05-15 01:09:36.175920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.176119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.176146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.918 qpair failed and we were unable to recover it. 00:22:23.918 [2024-05-15 01:09:36.176345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.176598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.176642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.918 qpair failed and we were unable to recover it. 00:22:23.918 [2024-05-15 01:09:36.176838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.177053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.177096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.918 qpair failed and we were unable to recover it. 00:22:23.918 [2024-05-15 01:09:36.177292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.177527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.177575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.918 qpair failed and we were unable to recover it. 
00:22:23.918 [2024-05-15 01:09:36.177776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.177996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.178023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.918 qpair failed and we were unable to recover it. 00:22:23.918 [2024-05-15 01:09:36.178236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.178472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.178513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.918 qpair failed and we were unable to recover it. 00:22:23.918 [2024-05-15 01:09:36.178732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.178949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.178975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.918 qpair failed and we were unable to recover it. 00:22:23.918 [2024-05-15 01:09:36.179191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.179409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.179436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.918 qpair failed and we were unable to recover it. 00:22:23.918 [2024-05-15 01:09:36.179653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.179869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.179894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.918 qpair failed and we were unable to recover it. 00:22:23.918 [2024-05-15 01:09:36.180097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.180302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.918 [2024-05-15 01:09:36.180345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.918 qpair failed and we were unable to recover it. 00:22:23.919 [2024-05-15 01:09:36.180567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.919 [2024-05-15 01:09:36.180749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.919 [2024-05-15 01:09:36.180775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.919 qpair failed and we were unable to recover it. 
00:22:23.919 [2024-05-15 01:09:36.180967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.919 [2024-05-15 01:09:36.181189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.919 [2024-05-15 01:09:36.181218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.919 qpair failed and we were unable to recover it. 00:22:23.919 [2024-05-15 01:09:36.181453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.919 [2024-05-15 01:09:36.181693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.919 [2024-05-15 01:09:36.181735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.919 qpair failed and we were unable to recover it. 00:22:23.919 [2024-05-15 01:09:36.181902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.919 [2024-05-15 01:09:36.182098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.919 [2024-05-15 01:09:36.182145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.919 qpair failed and we were unable to recover it. 00:22:23.919 [2024-05-15 01:09:36.182358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.919 [2024-05-15 01:09:36.182566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.919 [2024-05-15 01:09:36.182607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.919 qpair failed and we were unable to recover it. 00:22:23.919 [2024-05-15 01:09:36.182765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.919 [2024-05-15 01:09:36.182950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.919 [2024-05-15 01:09:36.182976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.919 qpair failed and we were unable to recover it. 00:22:23.919 [2024-05-15 01:09:36.183223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.919 [2024-05-15 01:09:36.183588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.919 [2024-05-15 01:09:36.183637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.919 qpair failed and we were unable to recover it. 00:22:23.919 [2024-05-15 01:09:36.183828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.919 [2024-05-15 01:09:36.184018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.919 [2024-05-15 01:09:36.184062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.919 qpair failed and we were unable to recover it. 
00:22:23.919 [2024-05-15 01:09:36.184325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:22:23.919 [2024-05-15 01:09:36.184730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:22:23.919 [2024-05-15 01:09:36.184783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 
00:22:23.919 qpair failed and we were unable to recover it. 
00:22:23.924 [... the same sequence (two posix_sock_create "connect() failed, errno = 111" entries, followed by the nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420" entry and "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 01:09:36.184 through 01:09:36.258 ...] 
00:22:23.924 [2024-05-15 01:09:36.259028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.924 [2024-05-15 01:09:36.259270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.924 [2024-05-15 01:09:36.259314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.924 qpair failed and we were unable to recover it. 00:22:23.924 [2024-05-15 01:09:36.259493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.924 [2024-05-15 01:09:36.259715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.924 [2024-05-15 01:09:36.259739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.924 qpair failed and we were unable to recover it. 00:22:23.924 [2024-05-15 01:09:36.259958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.924 [2024-05-15 01:09:36.260178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.924 [2024-05-15 01:09:36.260228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.924 qpair failed and we were unable to recover it. 00:22:23.924 [2024-05-15 01:09:36.260479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.924 [2024-05-15 01:09:36.260684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.924 [2024-05-15 01:09:36.260725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.924 qpair failed and we were unable to recover it. 00:22:23.924 [2024-05-15 01:09:36.260924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.924 [2024-05-15 01:09:36.261116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.924 [2024-05-15 01:09:36.261158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.924 qpair failed and we were unable to recover it. 00:22:23.924 [2024-05-15 01:09:36.261474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.924 [2024-05-15 01:09:36.261777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.924 [2024-05-15 01:09:36.261819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.924 qpair failed and we were unable to recover it. 00:22:23.924 [2024-05-15 01:09:36.262109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.924 [2024-05-15 01:09:36.262344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.924 [2024-05-15 01:09:36.262372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.924 qpair failed and we were unable to recover it. 
00:22:23.924 [2024-05-15 01:09:36.262570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.924 [2024-05-15 01:09:36.262805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.924 [2024-05-15 01:09:36.262830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.924 qpair failed and we were unable to recover it. 00:22:23.924 [2024-05-15 01:09:36.263049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.924 [2024-05-15 01:09:36.263291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.924 [2024-05-15 01:09:36.263346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.924 qpair failed and we were unable to recover it. 00:22:23.924 [2024-05-15 01:09:36.263565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.924 [2024-05-15 01:09:36.263774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.924 [2024-05-15 01:09:36.263799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.924 qpair failed and we were unable to recover it. 00:22:23.924 [2024-05-15 01:09:36.263981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.924 [2024-05-15 01:09:36.264322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.924 [2024-05-15 01:09:36.264379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.924 qpair failed and we were unable to recover it. 00:22:23.924 [2024-05-15 01:09:36.264612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.924 [2024-05-15 01:09:36.264814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.264839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.925 qpair failed and we were unable to recover it. 00:22:23.925 [2024-05-15 01:09:36.265049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.265314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.265361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.925 qpair failed and we were unable to recover it. 00:22:23.925 [2024-05-15 01:09:36.265590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.265840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.265880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.925 qpair failed and we were unable to recover it. 
00:22:23.925 [2024-05-15 01:09:36.266105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.266341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.266370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.925 qpair failed and we were unable to recover it. 00:22:23.925 [2024-05-15 01:09:36.266615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.266824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.266848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.925 qpair failed and we were unable to recover it. 00:22:23.925 [2024-05-15 01:09:36.267105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.267394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.267446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.925 qpair failed and we were unable to recover it. 00:22:23.925 [2024-05-15 01:09:36.267620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.267804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.267830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.925 qpair failed and we were unable to recover it. 00:22:23.925 [2024-05-15 01:09:36.268008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.268300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.268347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.925 qpair failed and we were unable to recover it. 00:22:23.925 [2024-05-15 01:09:36.268593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.268795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.268820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.925 qpair failed and we were unable to recover it. 00:22:23.925 [2024-05-15 01:09:36.269041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.269347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.269400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.925 qpair failed and we were unable to recover it. 
00:22:23.925 [2024-05-15 01:09:36.269620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.269817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.269843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.925 qpair failed and we were unable to recover it. 00:22:23.925 [2024-05-15 01:09:36.270047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.270257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.270300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.925 qpair failed and we were unable to recover it. 00:22:23.925 [2024-05-15 01:09:36.270519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.270686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.270711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.925 qpair failed and we were unable to recover it. 00:22:23.925 [2024-05-15 01:09:36.270911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.271133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.271178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.925 qpair failed and we were unable to recover it. 00:22:23.925 [2024-05-15 01:09:36.271498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.271825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.271867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.925 qpair failed and we were unable to recover it. 00:22:23.925 [2024-05-15 01:09:36.272062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.272284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.272326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.925 qpair failed and we were unable to recover it. 00:22:23.925 [2024-05-15 01:09:36.272554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.272775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.272799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.925 qpair failed and we were unable to recover it. 
00:22:23.925 [2024-05-15 01:09:36.273046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.273284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.273328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.925 qpair failed and we were unable to recover it. 00:22:23.925 [2024-05-15 01:09:36.273550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.273783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.273808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.925 qpair failed and we were unable to recover it. 00:22:23.925 [2024-05-15 01:09:36.273994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.274229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.274256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.925 qpair failed and we were unable to recover it. 00:22:23.925 [2024-05-15 01:09:36.274530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.274780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.274805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.925 qpair failed and we were unable to recover it. 00:22:23.925 [2024-05-15 01:09:36.275012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.275245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.275287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.925 qpair failed and we were unable to recover it. 00:22:23.925 [2024-05-15 01:09:36.275514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.275750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.275775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.925 qpair failed and we were unable to recover it. 00:22:23.925 [2024-05-15 01:09:36.275985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.276366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.276429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.925 qpair failed and we were unable to recover it. 
00:22:23.925 [2024-05-15 01:09:36.276651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.276855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.276880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.925 qpair failed and we were unable to recover it. 00:22:23.925 [2024-05-15 01:09:36.277104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.277311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.277352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.925 qpair failed and we were unable to recover it. 00:22:23.925 [2024-05-15 01:09:36.277558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.277736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.277760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.925 qpair failed and we were unable to recover it. 00:22:23.925 [2024-05-15 01:09:36.277980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.278162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.278205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.925 qpair failed and we were unable to recover it. 00:22:23.925 [2024-05-15 01:09:36.278481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.278894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.278957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.925 qpair failed and we were unable to recover it. 00:22:23.925 [2024-05-15 01:09:36.279211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.279438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.925 [2024-05-15 01:09:36.279483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:23.925 qpair failed and we were unable to recover it. 00:22:24.197 [2024-05-15 01:09:36.279705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.279896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.279921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.197 qpair failed and we were unable to recover it. 
00:22:24.197 [2024-05-15 01:09:36.280096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.280278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.280320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.197 qpair failed and we were unable to recover it. 00:22:24.197 [2024-05-15 01:09:36.280543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.280777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.280803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.197 qpair failed and we were unable to recover it. 00:22:24.197 [2024-05-15 01:09:36.281017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.281255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.281297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.197 qpair failed and we were unable to recover it. 00:22:24.197 [2024-05-15 01:09:36.281515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.281687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.281712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.197 qpair failed and we were unable to recover it. 00:22:24.197 [2024-05-15 01:09:36.281877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.282099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.282144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.197 qpair failed and we were unable to recover it. 00:22:24.197 [2024-05-15 01:09:36.282373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.282578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.282621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.197 qpair failed and we were unable to recover it. 00:22:24.197 [2024-05-15 01:09:36.282814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.283051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.283093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.197 qpair failed and we were unable to recover it. 
00:22:24.197 [2024-05-15 01:09:36.283306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.283634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.283689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.197 qpair failed and we were unable to recover it. 00:22:24.197 [2024-05-15 01:09:36.283892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.284066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.284091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.197 qpair failed and we were unable to recover it. 00:22:24.197 [2024-05-15 01:09:36.284338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.284570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.284613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.197 qpair failed and we were unable to recover it. 00:22:24.197 [2024-05-15 01:09:36.284840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.285042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.285068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.197 qpair failed and we were unable to recover it. 00:22:24.197 [2024-05-15 01:09:36.285270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.285482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.285524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.197 qpair failed and we were unable to recover it. 00:22:24.197 [2024-05-15 01:09:36.285684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.285851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.285876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.197 qpair failed and we were unable to recover it. 00:22:24.197 [2024-05-15 01:09:36.286096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.286336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.286364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.197 qpair failed and we were unable to recover it. 
00:22:24.197 [2024-05-15 01:09:36.286613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.286798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.286824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.197 qpair failed and we were unable to recover it. 00:22:24.197 [2024-05-15 01:09:36.287035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.287242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.287284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.197 qpair failed and we were unable to recover it. 00:22:24.197 [2024-05-15 01:09:36.287499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.287673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.287699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.197 qpair failed and we were unable to recover it. 00:22:24.197 [2024-05-15 01:09:36.287889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.288089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.288133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.197 qpair failed and we were unable to recover it. 00:22:24.197 [2024-05-15 01:09:36.288360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.288541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.288583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.197 qpair failed and we were unable to recover it. 00:22:24.197 [2024-05-15 01:09:36.288749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.288943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.288969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.197 qpair failed and we were unable to recover it. 00:22:24.197 [2024-05-15 01:09:36.289187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.289450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.289492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.197 qpair failed and we were unable to recover it. 
00:22:24.197 [2024-05-15 01:09:36.289742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.197 [2024-05-15 01:09:36.289938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.289964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.198 qpair failed and we were unable to recover it. 00:22:24.198 [2024-05-15 01:09:36.290205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.290534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.290586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.198 qpair failed and we were unable to recover it. 00:22:24.198 [2024-05-15 01:09:36.290803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.291037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.291064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.198 qpair failed and we were unable to recover it. 00:22:24.198 [2024-05-15 01:09:36.291312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.291608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.291657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.198 qpair failed and we were unable to recover it. 00:22:24.198 [2024-05-15 01:09:36.291852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.292038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.292064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.198 qpair failed and we were unable to recover it. 00:22:24.198 [2024-05-15 01:09:36.292285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.292511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.292554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.198 qpair failed and we were unable to recover it. 00:22:24.198 [2024-05-15 01:09:36.292723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.292905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.292936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.198 qpair failed and we were unable to recover it. 
00:22:24.198 [2024-05-15 01:09:36.293178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.293401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.293447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.198 qpair failed and we were unable to recover it. 00:22:24.198 [2024-05-15 01:09:36.293630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.293838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.293863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.198 qpair failed and we were unable to recover it. 00:22:24.198 [2024-05-15 01:09:36.294048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.294233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.294276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.198 qpair failed and we were unable to recover it. 00:22:24.198 [2024-05-15 01:09:36.294526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.294976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.295003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.198 qpair failed and we were unable to recover it. 00:22:24.198 [2024-05-15 01:09:36.295200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.295415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.295460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.198 qpair failed and we were unable to recover it. 00:22:24.198 [2024-05-15 01:09:36.295687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.295907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.295939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.198 qpair failed and we were unable to recover it. 00:22:24.198 [2024-05-15 01:09:36.296128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.296314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.296357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.198 qpair failed and we were unable to recover it. 
00:22:24.198 [2024-05-15 01:09:36.296572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.296782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.296807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.198 qpair failed and we were unable to recover it. 00:22:24.198 [2024-05-15 01:09:36.296987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.297185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.297228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.198 qpair failed and we were unable to recover it. 00:22:24.198 [2024-05-15 01:09:36.297444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.297705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.297747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.198 qpair failed and we were unable to recover it. 00:22:24.198 [2024-05-15 01:09:36.297942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.298158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.298203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.198 qpair failed and we were unable to recover it. 00:22:24.198 [2024-05-15 01:09:36.298454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.298806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.298871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.198 qpair failed and we were unable to recover it. 00:22:24.198 [2024-05-15 01:09:36.299094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.299333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.299360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.198 qpair failed and we were unable to recover it. 00:22:24.198 [2024-05-15 01:09:36.299607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.299783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.299808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.198 qpair failed and we were unable to recover it. 
00:22:24.198 [2024-05-15 01:09:36.299997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.300225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.300266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.198 qpair failed and we were unable to recover it. 00:22:24.198 [2024-05-15 01:09:36.300476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.300753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.300779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.198 qpair failed and we were unable to recover it. 00:22:24.198 [2024-05-15 01:09:36.300978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.301240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.301293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.198 qpair failed and we were unable to recover it. 00:22:24.198 [2024-05-15 01:09:36.301511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.301740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.301782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.198 qpair failed and we were unable to recover it. 00:22:24.198 [2024-05-15 01:09:36.301942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.302125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.302168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.198 qpair failed and we were unable to recover it. 00:22:24.198 [2024-05-15 01:09:36.302383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.302717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.302786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.198 qpair failed and we were unable to recover it. 00:22:24.198 [2024-05-15 01:09:36.303014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.303333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.303387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.198 qpair failed and we were unable to recover it. 
00:22:24.198 [2024-05-15 01:09:36.303596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.303854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.303893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.198 qpair failed and we were unable to recover it. 00:22:24.198 [2024-05-15 01:09:36.304132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.304428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.198 [2024-05-15 01:09:36.304469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.198 qpair failed and we were unable to recover it. 00:22:24.199 [2024-05-15 01:09:36.304786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.199 [2024-05-15 01:09:36.305007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.199 [2024-05-15 01:09:36.305032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.199 qpair failed and we were unable to recover it. 00:22:24.199 [2024-05-15 01:09:36.305285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.199 [2024-05-15 01:09:36.305592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.199 [2024-05-15 01:09:36.305640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.199 qpair failed and we were unable to recover it. 00:22:24.199 [2024-05-15 01:09:36.305834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.199 [2024-05-15 01:09:36.306098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.199 [2024-05-15 01:09:36.306142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.199 qpair failed and we were unable to recover it. 00:22:24.199 [2024-05-15 01:09:36.306367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.199 [2024-05-15 01:09:36.306677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.199 [2024-05-15 01:09:36.306736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.199 qpair failed and we were unable to recover it. 00:22:24.199 [2024-05-15 01:09:36.306941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.199 [2024-05-15 01:09:36.307107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.199 [2024-05-15 01:09:36.307132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.199 qpair failed and we were unable to recover it. 
00:22:24.199 [2024-05-15 01:09:36.307371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.199 [2024-05-15 01:09:36.307639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.199 [2024-05-15 01:09:36.307681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.199 qpair failed and we were unable to recover it. 00:22:24.199 [2024-05-15 01:09:36.307853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.199 [2024-05-15 01:09:36.308082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.199 [2024-05-15 01:09:36.308108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.199 qpair failed and we were unable to recover it. 00:22:24.199 [2024-05-15 01:09:36.308355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.199 [2024-05-15 01:09:36.308649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.199 [2024-05-15 01:09:36.308693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.199 qpair failed and we were unable to recover it. 00:22:24.199 [2024-05-15 01:09:36.308972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.199 [2024-05-15 01:09:36.309193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.199 [2024-05-15 01:09:36.309236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.199 qpair failed and we were unable to recover it. 00:22:24.199 [2024-05-15 01:09:36.309485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.199 [2024-05-15 01:09:36.309796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.199 [2024-05-15 01:09:36.309855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.199 qpair failed and we were unable to recover it. 00:22:24.199 [2024-05-15 01:09:36.310093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.199 [2024-05-15 01:09:36.310383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.199 [2024-05-15 01:09:36.310424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.199 qpair failed and we were unable to recover it. 00:22:24.199 [2024-05-15 01:09:36.310646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.199 [2024-05-15 01:09:36.310878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.199 [2024-05-15 01:09:36.310902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.199 qpair failed and we were unable to recover it. 
[... the same failure sequence repeats without interruption from [2024-05-15 01:09:36.311] through [2024-05-15 01:09:36.382] (console time 00:22:24.199 to 00:22:24.204): posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." ...]
00:22:24.204 [2024-05-15 01:09:36.382987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.204 [2024-05-15 01:09:36.383179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.204 [2024-05-15 01:09:36.383222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.204 qpair failed and we were unable to recover it. 00:22:24.204 [2024-05-15 01:09:36.383482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.204 [2024-05-15 01:09:36.383694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.204 [2024-05-15 01:09:36.383719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.204 qpair failed and we were unable to recover it. 00:22:24.204 [2024-05-15 01:09:36.383921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.204 [2024-05-15 01:09:36.384135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.204 [2024-05-15 01:09:36.384178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.204 qpair failed and we were unable to recover it. 00:22:24.204 [2024-05-15 01:09:36.384393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.204 [2024-05-15 01:09:36.384645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.204 [2024-05-15 01:09:36.384687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.204 qpair failed and we were unable to recover it. 00:22:24.204 [2024-05-15 01:09:36.384885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.204 [2024-05-15 01:09:36.385098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.204 [2024-05-15 01:09:36.385124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.204 qpair failed and we were unable to recover it. 00:22:24.204 [2024-05-15 01:09:36.385345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.204 [2024-05-15 01:09:36.385594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.204 [2024-05-15 01:09:36.385637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.204 qpair failed and we were unable to recover it. 00:22:24.204 [2024-05-15 01:09:36.385865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.204 [2024-05-15 01:09:36.386029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.204 [2024-05-15 01:09:36.386055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.204 qpair failed and we were unable to recover it. 
00:22:24.204 [2024-05-15 01:09:36.386246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.204 [2024-05-15 01:09:36.386482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.204 [2024-05-15 01:09:36.386510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.204 qpair failed and we were unable to recover it. 00:22:24.204 [2024-05-15 01:09:36.386723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.204 [2024-05-15 01:09:36.386894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.204 [2024-05-15 01:09:36.386920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.204 qpair failed and we were unable to recover it. 00:22:24.204 [2024-05-15 01:09:36.387105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.204 [2024-05-15 01:09:36.387399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.204 [2024-05-15 01:09:36.387452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.204 qpair failed and we were unable to recover it. 00:22:24.204 [2024-05-15 01:09:36.387641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.204 [2024-05-15 01:09:36.387883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.204 [2024-05-15 01:09:36.387907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.204 qpair failed and we were unable to recover it. 00:22:24.204 [2024-05-15 01:09:36.388147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.204 [2024-05-15 01:09:36.388387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.204 [2024-05-15 01:09:36.388416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.204 qpair failed and we were unable to recover it. 00:22:24.204 [2024-05-15 01:09:36.388635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.204 [2024-05-15 01:09:36.388844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.205 [2024-05-15 01:09:36.388868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.205 qpair failed and we were unable to recover it. 00:22:24.205 [2024-05-15 01:09:36.389093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.205 [2024-05-15 01:09:36.389389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.205 [2024-05-15 01:09:36.389453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.205 qpair failed and we were unable to recover it. 
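errno = 111 here is ECONNREFUSED: the host side keeps re-issuing connect() toward 10.0.0.2:4420 while nothing is listening there, so each attempt fails immediately and nvme_tcp_qpair_connect_sock gives up on the qpair every time. A minimal bash sketch of the same probe-and-retry idea follows; it is illustrative only, and the address, port, retry count, and back-off are assumptions taken from this log rather than anything in target_disconnect.sh:

    #!/usr/bin/env bash
    # Illustrative only: poll a TCP listener the way the host above keeps retrying.
    # connect() returning errno 111 (ECONNREFUSED) means nothing is accepting on
    # 10.0.0.2:4420 yet.
    addr=10.0.0.2 port=4420
    for attempt in $(seq 1 50); do
        # bash's /dev/tcp pseudo-device performs a plain connect(2); it fails
        # immediately with "Connection refused" while the target is down.
        if timeout 1 bash -c "exec 3<>/dev/tcp/$addr/$port" 2>/dev/null; then
            echo "attempt $attempt: $addr:$port is accepting connections again"
            exit 0
        fi
        sleep 0.2   # brief back-off before the next attempt
    done
    echo "gave up: $addr:$port still refusing connections" >&2
    exit 1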
00:22:24.205 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 1347656 Killed "${NVMF_APP[@]}" "$@"
00:22:24.205 01:09:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:22:24.205 01:09:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:22:24.205 01:09:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:22:24.205 01:09:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable
00:22:24.205 01:09:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:24.205 01:09:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1348224
00:22:24.205 01:09:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1348224
00:22:24.205 01:09:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:22:24.205 01:09:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1348224 ']'
00:22:24.205 01:09:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:24.205 01:09:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100
00:22:24.205 01:09:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:24.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:24.205 01:09:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable
00:22:24.205 01:09:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
(posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock, and qpair failed records for tqpair=0x7f63f0000b90 continue to arrive interleaved with the trace lines above, from 01:09:36.389 through 01:09:36.396)
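The trace above is the recovery half of the test: disconnect_init 10.0.0.2 calls nvmfappstart -m 0xF0, which launches a fresh nvmf_tgt with the options traced (-i 0 -e 0xFFFF -m 0xF0, i.e. core mask 0xF0) inside the cvl_0_0_ns_spdk network namespace, records its PID (1348224), and then waitforlisten polls for the RPC socket /var/tmp/spdk.sock with up to 100 retries. A rough bash approximation of that start-and-wait pattern follows; the real logic lives in test/nvmf/common.sh and test/common/autotest_common.sh, so the readiness check, paths, and polling interval below are simplifying assumptions, not the actual helpers:

    start_target_and_wait() {
        local rpc_addr=/var/tmp/spdk.sock
        local max_retries=100

        # Launch the target inside the isolated namespace, roughly what
        # nvmfappstart does via NVMF_APP.
        ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
        local nvmfpid=$!

        # waitforlisten-style loop: succeed once the RPC socket shows up
        # (the real helper goes further and issues an RPC over this socket).
        for ((i = 0; i < max_retries; i++)); do
            if ! kill -0 "$nvmfpid" 2>/dev/null; then
                echo "nvmf_tgt (pid $nvmfpid) exited before listening" >&2
                return 1
            fi
            if [ -S "$rpc_addr" ]; then
                echo "nvmf_tgt (pid $nvmfpid) is up, RPC socket at $rpc_addr"
                return 0
            fi
            sleep 0.5
        done
        echo "timed out waiting for $rpc_addr" >&2
        return 1
    }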
00:22:24.205 [2024-05-15 01:09:36.396867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.205 [2024-05-15 01:09:36.397034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.205 [2024-05-15 01:09:36.397060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420
00:22:24.205 qpair failed and we were unable to recover it.
(the same connect() failed / sock connection error / qpair failed sequence continues for every retry from 01:09:36.397 through 01:09:36.438 against tqpair=0x7f63f0000b90 at 10.0.0.2:4420, differing only in timestamps)
00:22:24.209 [2024-05-15 01:09:36.439088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.209 [2024-05-15 01:09:36.439307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.209 [2024-05-15 01:09:36.439349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420
00:22:24.209 qpair failed and we were unable to recover it.
00:22:24.209 [2024-05-15 01:09:36.439537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.209 [2024-05-15 01:09:36.439772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.209 [2024-05-15 01:09:36.439815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420
00:22:24.209 qpair failed and we were unable to recover it.
00:22:24.209 [2024-05-15 01:09:36.439977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.209 [2024-05-15 01:09:36.440167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.209 [2024-05-15 01:09:36.440210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420
00:22:24.209 qpair failed and we were unable to recover it.
00:22:24.209 [2024-05-15 01:09:36.440455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.209 [2024-05-15 01:09:36.440684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.209 [2024-05-15 01:09:36.440728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420
00:22:24.209 qpair failed and we were unable to recover it.
00:22:24.209 [2024-05-15 01:09:36.440922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.209 [2024-05-15 01:09:36.441116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.209 [2024-05-15 01:09:36.441144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420
00:22:24.209 qpair failed and we were unable to recover it.
00:22:24.209 [2024-05-15 01:09:36.441354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.209 [2024-05-15 01:09:36.441571] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization...
00:22:24.209 [2024-05-15 01:09:36.441591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.209 [2024-05-15 01:09:36.441639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420
00:22:24.209 qpair failed and we were unable to recover it.
00:22:24.209 [2024-05-15 01:09:36.441656] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:24.209 [2024-05-15 01:09:36.441834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.209 [2024-05-15 01:09:36.442050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.209 [2024-05-15 01:09:36.442075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420
00:22:24.209 qpair failed and we were unable to recover it.
00:22:24.209 [2024-05-15 01:09:36.442307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.209 [2024-05-15 01:09:36.442543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.209 [2024-05-15 01:09:36.442585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420
00:22:24.209 qpair failed and we were unable to recover it.
00:22:24.209 [2024-05-15 01:09:36.442777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.209 [2024-05-15 01:09:36.443027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.209 [2024-05-15 01:09:36.443072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420
00:22:24.209 qpair failed and we were unable to recover it.
00:22:24.209 [2024-05-15 01:09:36.443273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.209 [2024-05-15 01:09:36.443501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.209 [2024-05-15 01:09:36.443544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420
00:22:24.209 qpair failed and we were unable to recover it.
00:22:24.209 [2024-05-15 01:09:36.443755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.209 [2024-05-15 01:09:36.443942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.209 [2024-05-15 01:09:36.443969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420
00:22:24.209 qpair failed and we were unable to recover it.
00:22:24.209 [2024-05-15 01:09:36.444193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.209 [2024-05-15 01:09:36.444412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.209 [2024-05-15 01:09:36.444454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420
00:22:24.209 qpair failed and we were unable to recover it.
00:22:24.209 [2024-05-15 01:09:36.444675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.209 [2024-05-15 01:09:36.444909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.209 [2024-05-15 01:09:36.444940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420
00:22:24.209 qpair failed and we were unable to recover it.
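The bracketed "[ DPDK EAL parameters: ... ]" line above is the argument vector the nvmf target hands to DPDK's environment abstraction layer when it starts: core mask 0xF0 (cores 4-7), telemetry disabled, per-library log levels, a fixed base virtual address, and the spdk0 file prefix so this process's hugepage files do not clash with other SPDK processes. As a rough sketch of how such an argument vector reaches DPDK - hedged, because SPDK builds this internally through its env_dpdk layer rather than the application calling rte_eal_init() in this form, and compiling it requires the DPDK headers and libraries:

/* eal_args_sketch.c - illustrative only: the argument list mirrors the one
 * printed in the log; this is not how the SPDK nvmf target itself boots. */
#include <stdio.h>
#include <rte_eal.h>

int main(void)
{
    char *eal_argv[] = {
        "nvmf",
        "-c", "0xF0",                       /* run on cores 4-7 */
        "--no-telemetry",
        "--log-level=lib.eal:6",
        "--log-level=lib.cryptodev:5",
        "--log-level=user1:6",
        "--base-virtaddr=0x200000000000",
        "--match-allocations",
        "--file-prefix=spdk0",
        "--proc-type=auto",
    };
    int eal_argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0]));

    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        fprintf(stderr, "EAL initialization failed\n");
        return 1;
    }
    return 0;
}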
00:22:24.209 [2024-05-15 01:09:36.445117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.209 [2024-05-15 01:09:36.445327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.209 [2024-05-15 01:09:36.445355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.209 qpair failed and we were unable to recover it. 00:22:24.209 [2024-05-15 01:09:36.445615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.209 [2024-05-15 01:09:36.445811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.209 [2024-05-15 01:09:36.445836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.209 qpair failed and we were unable to recover it. 00:22:24.209 [2024-05-15 01:09:36.446060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.209 [2024-05-15 01:09:36.446270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.209 [2024-05-15 01:09:36.446313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.209 qpair failed and we were unable to recover it. 00:22:24.209 [2024-05-15 01:09:36.446537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.209 [2024-05-15 01:09:36.446780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.209 [2024-05-15 01:09:36.446826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.209 qpair failed and we were unable to recover it. 00:22:24.209 [2024-05-15 01:09:36.447052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.209 [2024-05-15 01:09:36.447269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.209 [2024-05-15 01:09:36.447297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.209 qpair failed and we were unable to recover it. 00:22:24.209 [2024-05-15 01:09:36.447526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.209 [2024-05-15 01:09:36.447744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.209 [2024-05-15 01:09:36.447769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.209 qpair failed and we were unable to recover it. 00:22:24.209 [2024-05-15 01:09:36.447935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.209 [2024-05-15 01:09:36.448123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.209 [2024-05-15 01:09:36.448165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.209 qpair failed and we were unable to recover it. 
00:22:24.209 [2024-05-15 01:09:36.448381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.209 [2024-05-15 01:09:36.448611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.209 [2024-05-15 01:09:36.448640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.209 qpair failed and we were unable to recover it. 00:22:24.209 [2024-05-15 01:09:36.448839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.209 [2024-05-15 01:09:36.449050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.209 [2024-05-15 01:09:36.449093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.209 qpair failed and we were unable to recover it. 00:22:24.209 [2024-05-15 01:09:36.449301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.209 [2024-05-15 01:09:36.449558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.209 [2024-05-15 01:09:36.449601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.209 qpair failed and we were unable to recover it. 00:22:24.209 [2024-05-15 01:09:36.449791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.209 [2024-05-15 01:09:36.450009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.209 [2024-05-15 01:09:36.450053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.209 qpair failed and we were unable to recover it. 00:22:24.209 [2024-05-15 01:09:36.450267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.209 [2024-05-15 01:09:36.450528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.209 [2024-05-15 01:09:36.450569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.209 qpair failed and we were unable to recover it. 00:22:24.209 [2024-05-15 01:09:36.450760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.209 [2024-05-15 01:09:36.450980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.209 [2024-05-15 01:09:36.451006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.209 qpair failed and we were unable to recover it. 00:22:24.209 [2024-05-15 01:09:36.451187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.209 [2024-05-15 01:09:36.451403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.209 [2024-05-15 01:09:36.451430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.209 qpair failed and we were unable to recover it. 
00:22:24.209 [2024-05-15 01:09:36.451684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.209 [2024-05-15 01:09:36.451896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.209 [2024-05-15 01:09:36.451922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.209 qpair failed and we were unable to recover it. 00:22:24.209 [2024-05-15 01:09:36.452089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.452304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.452352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.210 qpair failed and we were unable to recover it. 00:22:24.210 [2024-05-15 01:09:36.452547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.452806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.452849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.210 qpair failed and we were unable to recover it. 00:22:24.210 [2024-05-15 01:09:36.453055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.453299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.453352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.210 qpair failed and we were unable to recover it. 00:22:24.210 [2024-05-15 01:09:36.453540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.453800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.453843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.210 qpair failed and we were unable to recover it. 00:22:24.210 [2024-05-15 01:09:36.454055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.454379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.454422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.210 qpair failed and we were unable to recover it. 00:22:24.210 [2024-05-15 01:09:36.454647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.454831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.454855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.210 qpair failed and we were unable to recover it. 
00:22:24.210 [2024-05-15 01:09:36.455041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.455240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.455283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.210 qpair failed and we were unable to recover it. 00:22:24.210 [2024-05-15 01:09:36.455555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.455788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.455815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.210 qpair failed and we were unable to recover it. 00:22:24.210 [2024-05-15 01:09:36.456022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.456261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.456302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.210 qpair failed and we were unable to recover it. 00:22:24.210 [2024-05-15 01:09:36.456561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.456772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.456815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.210 qpair failed and we were unable to recover it. 00:22:24.210 [2024-05-15 01:09:36.457032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.457250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.457293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.210 qpair failed and we were unable to recover it. 00:22:24.210 [2024-05-15 01:09:36.457536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.457738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.457764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.210 qpair failed and we were unable to recover it. 00:22:24.210 [2024-05-15 01:09:36.457993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.458181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.458228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.210 qpair failed and we were unable to recover it. 
00:22:24.210 [2024-05-15 01:09:36.458424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.458640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.458684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.210 qpair failed and we were unable to recover it. 00:22:24.210 [2024-05-15 01:09:36.458877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.459090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.459134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.210 qpair failed and we were unable to recover it. 00:22:24.210 [2024-05-15 01:09:36.459352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.459612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.459654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.210 qpair failed and we were unable to recover it. 00:22:24.210 [2024-05-15 01:09:36.459873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.460046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.460072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.210 qpair failed and we were unable to recover it. 00:22:24.210 [2024-05-15 01:09:36.460292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.460555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.460597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.210 qpair failed and we were unable to recover it. 00:22:24.210 [2024-05-15 01:09:36.460857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.461072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.461097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.210 qpair failed and we were unable to recover it. 00:22:24.210 [2024-05-15 01:09:36.461351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.461559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.461600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.210 qpair failed and we were unable to recover it. 
00:22:24.210 [2024-05-15 01:09:36.461787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.461995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.462023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.210 qpair failed and we were unable to recover it. 00:22:24.210 [2024-05-15 01:09:36.462228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.462464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.462492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.210 qpair failed and we were unable to recover it. 00:22:24.210 [2024-05-15 01:09:36.462698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.462885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.462910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.210 qpair failed and we were unable to recover it. 00:22:24.210 [2024-05-15 01:09:36.463110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.463347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.210 [2024-05-15 01:09:36.463389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.210 qpair failed and we were unable to recover it. 00:22:24.210 [2024-05-15 01:09:36.463571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.463803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.463828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.211 qpair failed and we were unable to recover it. 00:22:24.211 [2024-05-15 01:09:36.464034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.464269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.464296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.211 qpair failed and we were unable to recover it. 00:22:24.211 [2024-05-15 01:09:36.464529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.464765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.464790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.211 qpair failed and we were unable to recover it. 
00:22:24.211 [2024-05-15 01:09:36.464948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.465194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.465236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.211 qpair failed and we were unable to recover it. 00:22:24.211 [2024-05-15 01:09:36.465424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.465656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.465698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.211 qpair failed and we were unable to recover it. 00:22:24.211 [2024-05-15 01:09:36.465888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.466119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.466163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.211 qpair failed and we were unable to recover it. 00:22:24.211 [2024-05-15 01:09:36.466413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.466642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.466685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.211 qpair failed and we were unable to recover it. 00:22:24.211 [2024-05-15 01:09:36.466844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.467062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.467105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.211 qpair failed and we were unable to recover it. 00:22:24.211 [2024-05-15 01:09:36.467291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.467529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.467575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.211 qpair failed and we were unable to recover it. 00:22:24.211 [2024-05-15 01:09:36.467792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.467999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.468043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.211 qpair failed and we were unable to recover it. 
00:22:24.211 [2024-05-15 01:09:36.468262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.468522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.468569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.211 qpair failed and we were unable to recover it. 00:22:24.211 [2024-05-15 01:09:36.468736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.468966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.468993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.211 qpair failed and we were unable to recover it. 00:22:24.211 [2024-05-15 01:09:36.469216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.469474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.469520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.211 qpair failed and we were unable to recover it. 00:22:24.211 [2024-05-15 01:09:36.469752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.469971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.469997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.211 qpair failed and we were unable to recover it. 00:22:24.211 [2024-05-15 01:09:36.470235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.470513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.470556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.211 qpair failed and we were unable to recover it. 00:22:24.211 [2024-05-15 01:09:36.470735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.470920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.470953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.211 qpair failed and we were unable to recover it. 00:22:24.211 [2024-05-15 01:09:36.471144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.471353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.471406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.211 qpair failed and we were unable to recover it. 
00:22:24.211 [2024-05-15 01:09:36.471604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.471839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.471881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.211 qpair failed and we were unable to recover it. 00:22:24.211 [2024-05-15 01:09:36.472111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.472331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.472373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.211 qpair failed and we were unable to recover it. 00:22:24.211 [2024-05-15 01:09:36.472554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.472780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.472823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.211 qpair failed and we were unable to recover it. 00:22:24.211 [2024-05-15 01:09:36.473013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.473221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.473264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.211 qpair failed and we were unable to recover it. 00:22:24.211 [2024-05-15 01:09:36.473438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.473671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.473698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.211 qpair failed and we were unable to recover it. 00:22:24.211 [2024-05-15 01:09:36.473890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.474081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.474106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.211 qpair failed and we were unable to recover it. 00:22:24.211 [2024-05-15 01:09:36.474295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.474469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.474496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.211 qpair failed and we were unable to recover it. 
00:22:24.211 [2024-05-15 01:09:36.474706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.474917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.474956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.211 qpair failed and we were unable to recover it. 00:22:24.211 [2024-05-15 01:09:36.475127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.475325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.475368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.211 qpair failed and we were unable to recover it. 00:22:24.211 [2024-05-15 01:09:36.475554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.475761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.475787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.211 qpair failed and we were unable to recover it. 00:22:24.211 [2024-05-15 01:09:36.476011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.476260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.476302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.211 qpair failed and we were unable to recover it. 00:22:24.211 [2024-05-15 01:09:36.476558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.476756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.211 [2024-05-15 01:09:36.476781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.211 qpair failed and we were unable to recover it. 00:22:24.211 [2024-05-15 01:09:36.477005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.212 [2024-05-15 01:09:36.477206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.212 [2024-05-15 01:09:36.477248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.212 qpair failed and we were unable to recover it. 00:22:24.212 [2024-05-15 01:09:36.477432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.212 [2024-05-15 01:09:36.477653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.212 [2024-05-15 01:09:36.477700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.212 qpair failed and we were unable to recover it. 
00:22:24.212 [2024-05-15 01:09:36.477886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.212 [2024-05-15 01:09:36.478111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.212 [2024-05-15 01:09:36.478154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.212 qpair failed and we were unable to recover it. 00:22:24.212 [2024-05-15 01:09:36.478350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.212 [2024-05-15 01:09:36.478580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.212 [2024-05-15 01:09:36.478623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.212 qpair failed and we were unable to recover it. 00:22:24.212 [2024-05-15 01:09:36.478816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.212 [2024-05-15 01:09:36.479070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.212 [2024-05-15 01:09:36.479114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.212 qpair failed and we were unable to recover it. 00:22:24.212 [2024-05-15 01:09:36.479343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.212 [2024-05-15 01:09:36.479602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.212 [2024-05-15 01:09:36.479648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.212 qpair failed and we were unable to recover it. 00:22:24.212 [2024-05-15 01:09:36.479838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.212 [2024-05-15 01:09:36.480058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.212 [2024-05-15 01:09:36.480108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.212 qpair failed and we were unable to recover it. 00:22:24.212 [2024-05-15 01:09:36.480320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.212 [2024-05-15 01:09:36.480553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.212 [2024-05-15 01:09:36.480600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.212 qpair failed and we were unable to recover it. 00:22:24.212 [2024-05-15 01:09:36.480767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.212 [2024-05-15 01:09:36.480940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.212 [2024-05-15 01:09:36.480966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.212 qpair failed and we were unable to recover it. 
00:22:24.212 [2024-05-15 01:09:36.481161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.212 [2024-05-15 01:09:36.481457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.212 [2024-05-15 01:09:36.481500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.212 qpair failed and we were unable to recover it. 00:22:24.212 [2024-05-15 01:09:36.481748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.212 [2024-05-15 01:09:36.481934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.212 [2024-05-15 01:09:36.481959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.212 qpair failed and we were unable to recover it. 00:22:24.212 EAL: No free 2048 kB hugepages reported on node 1 00:22:24.212 [2024-05-15 01:09:36.482124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.212 [2024-05-15 01:09:36.482321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.212 [2024-05-15 01:09:36.482349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.212 qpair failed and we were unable to recover it. 00:22:24.212 [2024-05-15 01:09:36.482577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.212 [2024-05-15 01:09:36.482787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.212 [2024-05-15 01:09:36.482835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.212 qpair failed and we were unable to recover it. 00:22:24.212 [2024-05-15 01:09:36.482997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.212 [2024-05-15 01:09:36.483213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.212 [2024-05-15 01:09:36.483255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.212 qpair failed and we were unable to recover it. 00:22:24.212 [2024-05-15 01:09:36.483462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.212 [2024-05-15 01:09:36.483662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.212 [2024-05-15 01:09:36.483703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.212 qpair failed and we were unable to recover it. 00:22:24.212 [2024-05-15 01:09:36.483862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.212 [2024-05-15 01:09:36.484026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.212 [2024-05-15 01:09:36.484053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.212 qpair failed and we were unable to recover it. 
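Buried in the chunk above is "EAL: No free 2048 kB hugepages reported on node 1": during the initialization started earlier, the EAL inspects the per-NUMA-node hugepage counters and notes that node 1 currently contributes no free 2 MB pages (the allocation then proceeds from node 0). The same counters can be read directly from sysfs; the short sketch below does that for node 1 and a 2048 kB page size. It is a diagnostic illustration, not part of the autotest scripts.

/* hugepage_check_sketch.c - reads the per-node 2048 kB hugepage counters that
 * the DPDK EAL consults at startup. Build with: cc hugepage_check_sketch.c */
#include <stdio.h>

static long read_counter(const char *path)
{
    long val = -1;
    FILE *f = fopen(path, "r");
    if (f != NULL) {
        if (fscanf(f, "%ld", &val) != 1)
            val = -1;
        fclose(f);
    }
    return val;
}

int main(void)
{
    const char *base = "/sys/devices/system/node/node1/hugepages/hugepages-2048kB";
    char path[256];

    snprintf(path, sizeof(path), "%s/nr_hugepages", base);
    long total = read_counter(path);
    snprintf(path, sizeof(path), "%s/free_hugepages", base);
    long free_pages = read_counter(path);

    /* "No free 2048 kB hugepages reported on node 1" corresponds to free == 0 */
    printf("node1 2048kB hugepages: total=%ld free=%ld\n", total, free_pages);
    return 0;
}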
00:22:24.212 [2024-05-15 01:09:36.484258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.212 [2024-05-15 01:09:36.484487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.212 [2024-05-15 01:09:36.484529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420
00:22:24.212 qpair failed and we were unable to recover it.
00:22:24.212 [... the same pattern (two posix_sock_create connect() failures with errno = 111, then an nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously from 01:09:36.484258 through 01:09:36.547516; the individual repeated entries are condensed here ...]
00:22:24.216 [2024-05-15 01:09:36.522682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:22:24.218 [2024-05-15 01:09:36.547321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.218 [2024-05-15 01:09:36.547490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.218 [2024-05-15 01:09:36.547516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420
00:22:24.218 qpair failed and we were unable to recover it.
00:22:24.218 [2024-05-15 01:09:36.547709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.547895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.547927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.218 qpair failed and we were unable to recover it. 00:22:24.218 [2024-05-15 01:09:36.548137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.548305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.548330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.218 qpair failed and we were unable to recover it. 00:22:24.218 [2024-05-15 01:09:36.548530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.548701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.548728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.218 qpair failed and we were unable to recover it. 00:22:24.218 [2024-05-15 01:09:36.548944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.549138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.549163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.218 qpair failed and we were unable to recover it. 00:22:24.218 [2024-05-15 01:09:36.549358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.549550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.549575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.218 qpair failed and we were unable to recover it. 00:22:24.218 [2024-05-15 01:09:36.549737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.549950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.549977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.218 qpair failed and we were unable to recover it. 00:22:24.218 [2024-05-15 01:09:36.550149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.550345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.550371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.218 qpair failed and we were unable to recover it. 
00:22:24.218 [2024-05-15 01:09:36.550564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.550751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.550778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.218 qpair failed and we were unable to recover it. 00:22:24.218 [2024-05-15 01:09:36.550942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.551107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.551132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.218 qpair failed and we were unable to recover it. 00:22:24.218 [2024-05-15 01:09:36.551325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.551511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.551555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.218 qpair failed and we were unable to recover it. 00:22:24.218 [2024-05-15 01:09:36.551722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.551888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.551914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.218 qpair failed and we were unable to recover it. 00:22:24.218 [2024-05-15 01:09:36.552096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.552262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.552287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.218 qpair failed and we were unable to recover it. 00:22:24.218 [2024-05-15 01:09:36.552447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.552614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.552641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.218 qpair failed and we were unable to recover it. 00:22:24.218 [2024-05-15 01:09:36.552828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.553003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.553031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.218 qpair failed and we were unable to recover it. 
00:22:24.218 [2024-05-15 01:09:36.553256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.553424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.553450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.218 qpair failed and we were unable to recover it. 00:22:24.218 [2024-05-15 01:09:36.553614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.553806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.553836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.218 qpair failed and we were unable to recover it. 00:22:24.218 [2024-05-15 01:09:36.554021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.554178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.554205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.218 qpair failed and we were unable to recover it. 00:22:24.218 [2024-05-15 01:09:36.554398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.554567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.554593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.218 qpair failed and we were unable to recover it. 00:22:24.218 [2024-05-15 01:09:36.554757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.554944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.554970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.218 qpair failed and we were unable to recover it. 00:22:24.218 [2024-05-15 01:09:36.555142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.555355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.555380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.218 qpair failed and we were unable to recover it. 00:22:24.218 [2024-05-15 01:09:36.555541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.555736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.555762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.218 qpair failed and we were unable to recover it. 
00:22:24.218 [2024-05-15 01:09:36.555926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.556107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.556133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.218 qpair failed and we were unable to recover it. 00:22:24.218 [2024-05-15 01:09:36.556291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.556476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.556501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.218 qpair failed and we were unable to recover it. 00:22:24.218 [2024-05-15 01:09:36.556704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.556901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.556926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.218 qpair failed and we were unable to recover it. 00:22:24.218 [2024-05-15 01:09:36.557107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.557268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.218 [2024-05-15 01:09:36.557295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.219 qpair failed and we were unable to recover it. 00:22:24.219 [2024-05-15 01:09:36.557490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.557709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.557734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.219 qpair failed and we were unable to recover it. 00:22:24.219 [2024-05-15 01:09:36.557902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.558101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.558130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.219 qpair failed and we were unable to recover it. 00:22:24.219 [2024-05-15 01:09:36.558287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.558451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.558476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.219 qpair failed and we were unable to recover it. 
00:22:24.219 [2024-05-15 01:09:36.558672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.558885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.558910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.219 qpair failed and we were unable to recover it. 00:22:24.219 [2024-05-15 01:09:36.559101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.559313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.559342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.219 qpair failed and we were unable to recover it. 00:22:24.219 [2024-05-15 01:09:36.559526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.559691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.559717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.219 qpair failed and we were unable to recover it. 00:22:24.219 [2024-05-15 01:09:36.559970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.560163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.560189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.219 qpair failed and we were unable to recover it. 00:22:24.219 [2024-05-15 01:09:36.560459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.560614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.560639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.219 qpair failed and we were unable to recover it. 00:22:24.219 [2024-05-15 01:09:36.560805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.561012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.561040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.219 qpair failed and we were unable to recover it. 00:22:24.219 [2024-05-15 01:09:36.561247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.561476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.561502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.219 qpair failed and we were unable to recover it. 
00:22:24.219 [2024-05-15 01:09:36.561706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.561941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.561967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.219 qpair failed and we were unable to recover it. 00:22:24.219 [2024-05-15 01:09:36.562155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.562370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.562396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.219 qpair failed and we were unable to recover it. 00:22:24.219 [2024-05-15 01:09:36.562565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.562795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.562821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.219 qpair failed and we were unable to recover it. 00:22:24.219 [2024-05-15 01:09:36.563015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.563204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.563237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.219 qpair failed and we were unable to recover it. 00:22:24.219 [2024-05-15 01:09:36.563429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.563609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.563635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.219 qpair failed and we were unable to recover it. 00:22:24.219 [2024-05-15 01:09:36.563823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.564009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.564035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.219 qpair failed and we were unable to recover it. 00:22:24.219 [2024-05-15 01:09:36.564248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.564454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.564479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.219 qpair failed and we were unable to recover it. 
00:22:24.219 [2024-05-15 01:09:36.564658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.564856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.564882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.219 qpair failed and we were unable to recover it. 00:22:24.219 [2024-05-15 01:09:36.565055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.565271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.565296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.219 qpair failed and we were unable to recover it. 00:22:24.219 [2024-05-15 01:09:36.565494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.565685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.565710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.219 qpair failed and we were unable to recover it. 00:22:24.219 [2024-05-15 01:09:36.565914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.566099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.219 [2024-05-15 01:09:36.566124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.220 qpair failed and we were unable to recover it. 00:22:24.220 [2024-05-15 01:09:36.566335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.220 [2024-05-15 01:09:36.566537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.220 [2024-05-15 01:09:36.566563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.220 qpair failed and we were unable to recover it. 00:22:24.220 [2024-05-15 01:09:36.566780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.220 [2024-05-15 01:09:36.566970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.220 [2024-05-15 01:09:36.567000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.220 qpair failed and we were unable to recover it. 00:22:24.220 [2024-05-15 01:09:36.567192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.220 [2024-05-15 01:09:36.567385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.220 [2024-05-15 01:09:36.567411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.220 qpair failed and we were unable to recover it. 
00:22:24.220 [2024-05-15 01:09:36.567565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.220 [2024-05-15 01:09:36.567754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.220 [2024-05-15 01:09:36.567779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.220 qpair failed and we were unable to recover it. 00:22:24.220 [2024-05-15 01:09:36.567973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.220 [2024-05-15 01:09:36.568135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.220 [2024-05-15 01:09:36.568160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.220 qpair failed and we were unable to recover it. 00:22:24.220 [2024-05-15 01:09:36.568356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.220 [2024-05-15 01:09:36.568523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.220 [2024-05-15 01:09:36.568548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.220 qpair failed and we were unable to recover it. 00:22:24.220 [2024-05-15 01:09:36.568735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.220 [2024-05-15 01:09:36.568910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.220 [2024-05-15 01:09:36.568953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.220 qpair failed and we were unable to recover it. 00:22:24.220 [2024-05-15 01:09:36.569142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.220 [2024-05-15 01:09:36.569331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.220 [2024-05-15 01:09:36.569357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.220 qpair failed and we were unable to recover it. 00:22:24.220 [2024-05-15 01:09:36.569567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.220 [2024-05-15 01:09:36.569724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.220 [2024-05-15 01:09:36.569750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.220 qpair failed and we were unable to recover it. 00:22:24.220 [2024-05-15 01:09:36.569938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.220 [2024-05-15 01:09:36.570101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.220 [2024-05-15 01:09:36.570127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.220 qpair failed and we were unable to recover it. 
00:22:24.220 [2024-05-15 01:09:36.570299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.220 [2024-05-15 01:09:36.570487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.220 [2024-05-15 01:09:36.570512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.220 qpair failed and we were unable to recover it. 00:22:24.220 [2024-05-15 01:09:36.570729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.220 [2024-05-15 01:09:36.570892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.220 [2024-05-15 01:09:36.570921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.220 qpair failed and we were unable to recover it. 00:22:24.220 [2024-05-15 01:09:36.571124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.220 [2024-05-15 01:09:36.571316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.220 [2024-05-15 01:09:36.571341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.220 qpair failed and we were unable to recover it. 00:22:24.220 [2024-05-15 01:09:36.571517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.220 [2024-05-15 01:09:36.571715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.220 [2024-05-15 01:09:36.571741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.220 qpair failed and we were unable to recover it. 00:22:24.220 [2024-05-15 01:09:36.571969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.220 [2024-05-15 01:09:36.572158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.220 [2024-05-15 01:09:36.572185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.220 qpair failed and we were unable to recover it. 00:22:24.220 [2024-05-15 01:09:36.572364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.220 [2024-05-15 01:09:36.572525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.221 [2024-05-15 01:09:36.572565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.221 qpair failed and we were unable to recover it. 00:22:24.221 [2024-05-15 01:09:36.572766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.221 [2024-05-15 01:09:36.572927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.221 [2024-05-15 01:09:36.572966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.221 qpair failed and we were unable to recover it. 
00:22:24.221 [2024-05-15 01:09:36.573135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.221 [2024-05-15 01:09:36.573300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.221 [2024-05-15 01:09:36.573332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.221 qpair failed and we were unable to recover it. 00:22:24.221 [2024-05-15 01:09:36.573526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.221 [2024-05-15 01:09:36.574592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.221 [2024-05-15 01:09:36.574635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.221 qpair failed and we were unable to recover it. 00:22:24.221 [2024-05-15 01:09:36.574848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.221 [2024-05-15 01:09:36.575048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.221 [2024-05-15 01:09:36.575074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.221 qpair failed and we were unable to recover it. 00:22:24.221 [2024-05-15 01:09:36.575243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.221 [2024-05-15 01:09:36.575417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.221 [2024-05-15 01:09:36.575444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.221 qpair failed and we were unable to recover it. 00:22:24.221 [2024-05-15 01:09:36.575664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.221 [2024-05-15 01:09:36.575872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.221 [2024-05-15 01:09:36.575902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.221 qpair failed and we were unable to recover it. 00:22:24.221 [2024-05-15 01:09:36.576112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.221 [2024-05-15 01:09:36.576268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.221 [2024-05-15 01:09:36.576294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.221 qpair failed and we were unable to recover it. 00:22:24.221 [2024-05-15 01:09:36.576487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.221 [2024-05-15 01:09:36.576657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.221 [2024-05-15 01:09:36.576696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.221 qpair failed and we were unable to recover it. 
00:22:24.221 [2024-05-15 01:09:36.576900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.221 [2024-05-15 01:09:36.577088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.221 [2024-05-15 01:09:36.577113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.221 qpair failed and we were unable to recover it. 00:22:24.221 [2024-05-15 01:09:36.577298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.221 [2024-05-15 01:09:36.577451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.221 [2024-05-15 01:09:36.577487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.221 qpair failed and we were unable to recover it. 00:22:24.501 [2024-05-15 01:09:36.577676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.577841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.577866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.501 qpair failed and we were unable to recover it. 00:22:24.501 [2024-05-15 01:09:36.578043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.578263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.578289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.501 qpair failed and we were unable to recover it. 00:22:24.501 [2024-05-15 01:09:36.578488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.578681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.578707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.501 qpair failed and we were unable to recover it. 00:22:24.501 [2024-05-15 01:09:36.578905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.579074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.579100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.501 qpair failed and we were unable to recover it. 00:22:24.501 [2024-05-15 01:09:36.579261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.579444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.579469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.501 qpair failed and we were unable to recover it. 
00:22:24.501 [2024-05-15 01:09:36.579642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.579858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.579887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.501 qpair failed and we were unable to recover it. 00:22:24.501 [2024-05-15 01:09:36.580060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.580234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.580274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.501 qpair failed and we were unable to recover it. 00:22:24.501 [2024-05-15 01:09:36.580480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.580696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.580721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.501 qpair failed and we were unable to recover it. 00:22:24.501 [2024-05-15 01:09:36.580917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.581085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.581111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.501 qpair failed and we were unable to recover it. 00:22:24.501 [2024-05-15 01:09:36.581289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.581501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.581525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.501 qpair failed and we were unable to recover it. 00:22:24.501 [2024-05-15 01:09:36.581723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.581926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.581958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.501 qpair failed and we were unable to recover it. 00:22:24.501 [2024-05-15 01:09:36.582128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.582315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.582341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.501 qpair failed and we were unable to recover it. 
00:22:24.501 [2024-05-15 01:09:36.582530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.582711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.582736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.501 qpair failed and we were unable to recover it. 00:22:24.501 [2024-05-15 01:09:36.583137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.583383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.583417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.501 qpair failed and we were unable to recover it. 00:22:24.501 [2024-05-15 01:09:36.583615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.583801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.583826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.501 qpair failed and we were unable to recover it. 00:22:24.501 [2024-05-15 01:09:36.583994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.584215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.584240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.501 qpair failed and we were unable to recover it. 00:22:24.501 [2024-05-15 01:09:36.584973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.585248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.585277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.501 qpair failed and we were unable to recover it. 00:22:24.501 [2024-05-15 01:09:36.585448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.585750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.585775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.501 qpair failed and we were unable to recover it. 00:22:24.501 [2024-05-15 01:09:36.585968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.586127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.586153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.501 qpair failed and we were unable to recover it. 
00:22:24.501 [2024-05-15 01:09:36.586357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.586561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.586588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.501 qpair failed and we were unable to recover it. 00:22:24.501 [2024-05-15 01:09:36.586765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.586977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.501 [2024-05-15 01:09:36.587004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.501 qpair failed and we were unable to recover it. 00:22:24.502 [2024-05-15 01:09:36.587192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.587365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.587391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.502 qpair failed and we were unable to recover it. 00:22:24.502 [2024-05-15 01:09:36.587585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.587815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.587841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.502 qpair failed and we were unable to recover it. 00:22:24.502 [2024-05-15 01:09:36.588027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.588221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.588249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.502 qpair failed and we were unable to recover it. 00:22:24.502 [2024-05-15 01:09:36.588450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.588668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.588694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.502 qpair failed and we were unable to recover it. 00:22:24.502 [2024-05-15 01:09:36.588864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.589041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.589068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.502 qpair failed and we were unable to recover it. 
00:22:24.502 [2024-05-15 01:09:36.589273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.589474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.589499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.502 qpair failed and we were unable to recover it. 00:22:24.502 [2024-05-15 01:09:36.589687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.589863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.589889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.502 qpair failed and we were unable to recover it. 00:22:24.502 [2024-05-15 01:09:36.590119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.590306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.590333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.502 qpair failed and we were unable to recover it. 00:22:24.502 [2024-05-15 01:09:36.590536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.590693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.590728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.502 qpair failed and we were unable to recover it. 00:22:24.502 [2024-05-15 01:09:36.590952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.591112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.591138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.502 qpair failed and we were unable to recover it. 00:22:24.502 [2024-05-15 01:09:36.591315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.591540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.591566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.502 qpair failed and we were unable to recover it. 00:22:24.502 [2024-05-15 01:09:36.591728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.591956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.591983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.502 qpair failed and we were unable to recover it. 
00:22:24.502 [2024-05-15 01:09:36.592183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.592370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.592395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.502 qpair failed and we were unable to recover it. 00:22:24.502 [2024-05-15 01:09:36.592627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.592882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.592906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.502 qpair failed and we were unable to recover it. 00:22:24.502 [2024-05-15 01:09:36.593112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.593329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.593355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.502 qpair failed and we were unable to recover it. 00:22:24.502 [2024-05-15 01:09:36.593546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.593705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.593732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.502 qpair failed and we were unable to recover it. 00:22:24.502 [2024-05-15 01:09:36.593945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.594107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.594132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.502 qpair failed and we were unable to recover it. 00:22:24.502 [2024-05-15 01:09:36.594307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.594571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.594596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.502 qpair failed and we were unable to recover it. 00:22:24.502 [2024-05-15 01:09:36.595562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.596447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.596487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.502 qpair failed and we were unable to recover it. 
00:22:24.502 [2024-05-15 01:09:36.597320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.597603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.597631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.502 qpair failed and we were unable to recover it. 00:22:24.502 [2024-05-15 01:09:36.597798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.597999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.598027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.502 qpair failed and we were unable to recover it. 00:22:24.502 [2024-05-15 01:09:36.598242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.598450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.598475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.502 qpair failed and we were unable to recover it. 00:22:24.502 [2024-05-15 01:09:36.598695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.598889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.598914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.502 qpair failed and we were unable to recover it. 00:22:24.502 [2024-05-15 01:09:36.599105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.599299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.599325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.502 qpair failed and we were unable to recover it. 00:22:24.502 [2024-05-15 01:09:36.599501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.599692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.599719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.502 qpair failed and we were unable to recover it. 00:22:24.502 [2024-05-15 01:09:36.599940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.600136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.600161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.502 qpair failed and we were unable to recover it. 
00:22:24.502 [2024-05-15 01:09:36.600368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.600555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.600580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.502 qpair failed and we were unable to recover it. 00:22:24.502 [2024-05-15 01:09:36.600775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.600941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.600967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.502 qpair failed and we were unable to recover it. 00:22:24.502 [2024-05-15 01:09:36.601159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.601329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.601360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.502 qpair failed and we were unable to recover it. 00:22:24.502 [2024-05-15 01:09:36.601574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.502 [2024-05-15 01:09:36.601770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.601795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.503 qpair failed and we were unable to recover it. 00:22:24.503 [2024-05-15 01:09:36.601965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.602161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.602186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.503 qpair failed and we were unable to recover it. 00:22:24.503 [2024-05-15 01:09:36.602414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.602618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.602645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.503 qpair failed and we were unable to recover it. 00:22:24.503 [2024-05-15 01:09:36.602838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.603011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.603037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.503 qpair failed and we were unable to recover it. 
00:22:24.503 [2024-05-15 01:09:36.603197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.603392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.603417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.503 qpair failed and we were unable to recover it. 00:22:24.503 [2024-05-15 01:09:36.603608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.603830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.603857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.503 qpair failed and we were unable to recover it. 00:22:24.503 [2024-05-15 01:09:36.604057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.604256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.604285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.503 qpair failed and we were unable to recover it. 00:22:24.503 [2024-05-15 01:09:36.604469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.604663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.604688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.503 qpair failed and we were unable to recover it. 00:22:24.503 [2024-05-15 01:09:36.604885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.605093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.605118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.503 qpair failed and we were unable to recover it. 00:22:24.503 [2024-05-15 01:09:36.605314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.605499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.605524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.503 qpair failed and we were unable to recover it. 00:22:24.503 [2024-05-15 01:09:36.605710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.605881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.605922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.503 qpair failed and we were unable to recover it. 
00:22:24.503 [2024-05-15 01:09:36.606145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.606308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.606334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.503 qpair failed and we were unable to recover it. 00:22:24.503 [2024-05-15 01:09:36.606549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.606735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.606761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.503 qpair failed and we were unable to recover it. 00:22:24.503 [2024-05-15 01:09:36.606959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.607134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.607160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.503 qpair failed and we were unable to recover it. 00:22:24.503 [2024-05-15 01:09:36.607848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.608093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.608121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.503 qpair failed and we were unable to recover it. 00:22:24.503 [2024-05-15 01:09:36.608341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.608543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.608570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.503 qpair failed and we were unable to recover it. 00:22:24.503 [2024-05-15 01:09:36.608773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.608947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.608974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.503 qpair failed and we were unable to recover it. 00:22:24.503 [2024-05-15 01:09:36.609137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.609366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.609391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.503 qpair failed and we were unable to recover it. 
00:22:24.503 [2024-05-15 01:09:36.609581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.609769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.609794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.503 qpair failed and we were unable to recover it. 00:22:24.503 [2024-05-15 01:09:36.609959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.610123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.610150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.503 qpair failed and we were unable to recover it. 00:22:24.503 [2024-05-15 01:09:36.610337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.610530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.610556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.503 qpair failed and we were unable to recover it. 00:22:24.503 [2024-05-15 01:09:36.610724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.610885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.610910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.503 qpair failed and we were unable to recover it. 00:22:24.503 [2024-05-15 01:09:36.611098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.611259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.611284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.503 qpair failed and we were unable to recover it. 00:22:24.503 [2024-05-15 01:09:36.611480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.611647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.611673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.503 qpair failed and we were unable to recover it. 00:22:24.503 [2024-05-15 01:09:36.611833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.612036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.612063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.503 qpair failed and we were unable to recover it. 
00:22:24.503 [2024-05-15 01:09:36.612218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.612424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.612449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.503 qpair failed and we were unable to recover it. 00:22:24.503 [2024-05-15 01:09:36.612644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.612834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.612859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.503 qpair failed and we were unable to recover it. 00:22:24.503 [2024-05-15 01:09:36.613090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.613266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.613294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.503 qpair failed and we were unable to recover it. 00:22:24.503 [2024-05-15 01:09:36.613518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.613718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.613743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.503 qpair failed and we were unable to recover it. 00:22:24.503 [2024-05-15 01:09:36.613939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.614107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.503 [2024-05-15 01:09:36.614132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.503 qpair failed and we were unable to recover it. 00:22:24.503 [2024-05-15 01:09:36.614292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.614489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.614514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.504 qpair failed and we were unable to recover it. 00:22:24.504 [2024-05-15 01:09:36.614703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.614897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.614923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.504 qpair failed and we were unable to recover it. 
00:22:24.504 [2024-05-15 01:09:36.615101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.615315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.615340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.504 qpair failed and we were unable to recover it. 00:22:24.504 [2024-05-15 01:09:36.615512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.615705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.615731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.504 qpair failed and we were unable to recover it. 00:22:24.504 [2024-05-15 01:09:36.615897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.616082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.616108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.504 qpair failed and we were unable to recover it. 00:22:24.504 [2024-05-15 01:09:36.616268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.616465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.616490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.504 qpair failed and we were unable to recover it. 00:22:24.504 [2024-05-15 01:09:36.616670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.616856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.616896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.504 qpair failed and we were unable to recover it. 00:22:24.504 [2024-05-15 01:09:36.617122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.617339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.617378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.504 qpair failed and we were unable to recover it. 00:22:24.504 [2024-05-15 01:09:36.617611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.617794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.617820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.504 qpair failed and we were unable to recover it. 
00:22:24.504 [2024-05-15 01:09:36.618029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.618188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.618214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.504 qpair failed and we were unable to recover it. 00:22:24.504 [2024-05-15 01:09:36.618407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.618565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.618590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.504 qpair failed and we were unable to recover it. 00:22:24.504 [2024-05-15 01:09:36.618804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.619691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.619719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.504 qpair failed and we were unable to recover it. 00:22:24.504 [2024-05-15 01:09:36.619955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.620121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.620148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.504 qpair failed and we were unable to recover it. 00:22:24.504 [2024-05-15 01:09:36.620353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.620560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.620586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.504 qpair failed and we were unable to recover it. 00:22:24.504 [2024-05-15 01:09:36.620776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.620974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.621000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.504 qpair failed and we were unable to recover it. 00:22:24.504 [2024-05-15 01:09:36.621205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.621406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.621430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.504 qpair failed and we were unable to recover it. 
00:22:24.504 [2024-05-15 01:09:36.621633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.621830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.621866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.504 qpair failed and we were unable to recover it. 00:22:24.504 [2024-05-15 01:09:36.622036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.622196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.622221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.504 qpair failed and we were unable to recover it. 00:22:24.504 [2024-05-15 01:09:36.622422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.622613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.622638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.504 qpair failed and we were unable to recover it. 00:22:24.504 [2024-05-15 01:09:36.622825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.623022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.623047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.504 qpair failed and we were unable to recover it. 00:22:24.504 [2024-05-15 01:09:36.623203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.623397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.623422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.504 qpair failed and we were unable to recover it. 00:22:24.504 [2024-05-15 01:09:36.623644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.623813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.623837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.504 qpair failed and we were unable to recover it. 00:22:24.504 [2024-05-15 01:09:36.623995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.624168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.624193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.504 qpair failed and we were unable to recover it. 
00:22:24.504 [2024-05-15 01:09:36.624359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.624581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.624605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.504 qpair failed and we were unable to recover it. 00:22:24.504 [2024-05-15 01:09:36.624807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.624999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.625025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.504 qpair failed and we were unable to recover it. 00:22:24.504 [2024-05-15 01:09:36.625191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.625378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.625402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.504 qpair failed and we were unable to recover it. 00:22:24.504 [2024-05-15 01:09:36.625568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.625738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.625766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.504 qpair failed and we were unable to recover it. 00:22:24.504 [2024-05-15 01:09:36.625967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.626118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.626143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.504 qpair failed and we were unable to recover it. 00:22:24.504 [2024-05-15 01:09:36.626348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.626613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.626638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.504 qpair failed and we were unable to recover it. 00:22:24.504 [2024-05-15 01:09:36.626849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.627022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.504 [2024-05-15 01:09:36.627048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.504 qpair failed and we were unable to recover it. 
00:22:24.504 [2024-05-15 01:09:36.627239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.627441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.627466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.505 qpair failed and we were unable to recover it. 00:22:24.505 [2024-05-15 01:09:36.627704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.627901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.627925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.505 qpair failed and we were unable to recover it. 00:22:24.505 [2024-05-15 01:09:36.628106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.628285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.628310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.505 qpair failed and we were unable to recover it. 00:22:24.505 [2024-05-15 01:09:36.628527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.628713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.628737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.505 qpair failed and we were unable to recover it. 00:22:24.505 [2024-05-15 01:09:36.628965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.629149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.629174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.505 qpair failed and we were unable to recover it. 00:22:24.505 [2024-05-15 01:09:36.629342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.629565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.629590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.505 qpair failed and we were unable to recover it. 00:22:24.505 [2024-05-15 01:09:36.629765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.629965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.629992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.505 qpair failed and we were unable to recover it. 
00:22:24.505 [2024-05-15 01:09:36.630163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.630320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.630344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.505 qpair failed and we were unable to recover it. 00:22:24.505 [2024-05-15 01:09:36.630501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.630685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.630721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.505 qpair failed and we were unable to recover it. 00:22:24.505 [2024-05-15 01:09:36.630893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.631104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.631130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.505 qpair failed and we were unable to recover it. 00:22:24.505 [2024-05-15 01:09:36.631282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.631432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.631457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.505 qpair failed and we were unable to recover it. 00:22:24.505 [2024-05-15 01:09:36.631637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.631831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.631855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.505 qpair failed and we were unable to recover it. 00:22:24.505 [2024-05-15 01:09:36.632077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.632283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.632308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.505 qpair failed and we were unable to recover it. 00:22:24.505 [2024-05-15 01:09:36.632506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.632701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.632726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.505 qpair failed and we were unable to recover it. 
00:22:24.505 [2024-05-15 01:09:36.632886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.633078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.633105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.505 qpair failed and we were unable to recover it. 00:22:24.505 [2024-05-15 01:09:36.633276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.633443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.633467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.505 qpair failed and we were unable to recover it. 00:22:24.505 [2024-05-15 01:09:36.633631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.633821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.633846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.505 qpair failed and we were unable to recover it. 00:22:24.505 [2024-05-15 01:09:36.634052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.634212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.634237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.505 qpair failed and we were unable to recover it. 00:22:24.505 [2024-05-15 01:09:36.634425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.634620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.634645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.505 qpair failed and we were unable to recover it. 00:22:24.505 [2024-05-15 01:09:36.634842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.635003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.635028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.505 qpair failed and we were unable to recover it. 00:22:24.505 [2024-05-15 01:09:36.635192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.635389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.635414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.505 qpair failed and we were unable to recover it. 
00:22:24.505 [2024-05-15 01:09:36.635604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.635790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.635815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.505 qpair failed and we were unable to recover it. 00:22:24.505 [2024-05-15 01:09:36.635978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.636141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.636165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.505 qpair failed and we were unable to recover it. 00:22:24.505 [2024-05-15 01:09:36.636337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.636606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.636630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.505 qpair failed and we were unable to recover it. 00:22:24.505 [2024-05-15 01:09:36.636838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.505 [2024-05-15 01:09:36.637028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.506 [2024-05-15 01:09:36.637055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.506 qpair failed and we were unable to recover it. 00:22:24.506 [2024-05-15 01:09:36.637247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.506 [2024-05-15 01:09:36.638047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.506 [2024-05-15 01:09:36.638077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.506 qpair failed and we were unable to recover it. 00:22:24.506 [2024-05-15 01:09:36.638294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.506 [2024-05-15 01:09:36.638495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.506 [2024-05-15 01:09:36.638520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.506 qpair failed and we were unable to recover it. 00:22:24.506 [2024-05-15 01:09:36.638689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.506 [2024-05-15 01:09:36.638873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.506 [2024-05-15 01:09:36.638898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.506 qpair failed and we were unable to recover it. 
00:22:24.506 [2024-05-15 01:09:36.639104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.506 [2024-05-15 01:09:36.639266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.506 [2024-05-15 01:09:36.639290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.506 qpair failed and we were unable to recover it. 00:22:24.506 [2024-05-15 01:09:36.639480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.506 [2024-05-15 01:09:36.639671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.506 [2024-05-15 01:09:36.639695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.506 qpair failed and we were unable to recover it. 00:22:24.506 [2024-05-15 01:09:36.639859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.506 [2024-05-15 01:09:36.640051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.506 [2024-05-15 01:09:36.640077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.506 qpair failed and we were unable to recover it. 00:22:24.506 [2024-05-15 01:09:36.640233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.506 [2024-05-15 01:09:36.640404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.506 [2024-05-15 01:09:36.640428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.506 qpair failed and we were unable to recover it. 00:22:24.506 [2024-05-15 01:09:36.640712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.506 [2024-05-15 01:09:36.640901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.506 [2024-05-15 01:09:36.640925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.506 qpair failed and we were unable to recover it. 00:22:24.506 [2024-05-15 01:09:36.641100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.506 [2024-05-15 01:09:36.641267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.506 [2024-05-15 01:09:36.641292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.506 qpair failed and we were unable to recover it. 00:22:24.506 [2024-05-15 01:09:36.641452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.506 [2024-05-15 01:09:36.641647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.506 [2024-05-15 01:09:36.641671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.506 qpair failed and we were unable to recover it. 
00:22:24.506 [2024-05-15 01:09:36.641828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.506 [2024-05-15 01:09:36.642021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.506 [2024-05-15 01:09:36.642048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.506 qpair failed and we were unable to recover it. 00:22:24.506 [2024-05-15 01:09:36.642315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.506 [2024-05-15 01:09:36.642542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.506 [2024-05-15 01:09:36.642567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.506 qpair failed and we were unable to recover it. 00:22:24.506 [2024-05-15 01:09:36.642766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.506 [2024-05-15 01:09:36.642985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.506 [2024-05-15 01:09:36.643011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.506 qpair failed and we were unable to recover it. 00:22:24.506 [2024-05-15 01:09:36.643182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.506 [2024-05-15 01:09:36.643378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.506 [2024-05-15 01:09:36.643402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.506 qpair failed and we were unable to recover it. 00:22:24.506 [2024-05-15 01:09:36.643598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.506 [2024-05-15 01:09:36.643758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.506 [2024-05-15 01:09:36.643783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.506 qpair failed and we were unable to recover it. 00:22:24.506 [2024-05-15 01:09:36.643956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.506 [2024-05-15 01:09:36.644148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.506 [2024-05-15 01:09:36.644172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.506 qpair failed and we were unable to recover it. 00:22:24.506 [2024-05-15 01:09:36.644366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.506 [2024-05-15 01:09:36.644556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.506 [2024-05-15 01:09:36.644580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.506 qpair failed and we were unable to recover it. 
00:22:24.506 [2024-05-15 01:09:36.644789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.506 [2024-05-15 01:09:36.644981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.506 [2024-05-15 01:09:36.645008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420
00:22:24.506 qpair failed and we were unable to recover it.
00:22:24.506 [2024-05-15 01:09:36.645173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.506 [2024-05-15 01:09:36.645361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.506 [2024-05-15 01:09:36.645385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420
00:22:24.506 qpair failed and we were unable to recover it.
00:22:24.506 [2024-05-15 01:09:36.645546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.506 [2024-05-15 01:09:36.645699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.506 [2024-05-15 01:09:36.645724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420
00:22:24.506 qpair failed and we were unable to recover it.
00:22:24.506 [2024-05-15 01:09:36.645884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.506 [2024-05-15 01:09:36.646087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.506 [2024-05-15 01:09:36.646113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420
00:22:24.506 qpair failed and we were unable to recover it.
00:22:24.506 [2024-05-15 01:09:36.646281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.506 [2024-05-15 01:09:36.646436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.506 [2024-05-15 01:09:36.646460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420
00:22:24.506 qpair failed and we were unable to recover it.
00:22:24.506 [2024-05-15 01:09:36.646624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.506 [2024-05-15 01:09:36.646783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.506 [2024-05-15 01:09:36.646812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420
00:22:24.506 qpair failed and we were unable to recover it.
00:22:24.506 [2024-05-15 01:09:36.646847] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:24.506 [2024-05-15 01:09:36.646882] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:24.506 [2024-05-15 01:09:36.646897] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:24.506 [2024-05-15 01:09:36.646910] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:24.506 [2024-05-15 01:09:36.646920] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:24.506 [2024-05-15 01:09:36.646975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.506 [2024-05-15 01:09:36.647029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:22:24.506 [2024-05-15 01:09:36.647078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:22:24.506 [2024-05-15 01:09:36.647139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.506 [2024-05-15 01:09:36.647163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 qpair failed and we were unable to recover it.
00:22:24.506 [2024-05-15 01:09:36.647111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:22:24.506 [2024-05-15 01:09:36.647114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:22:24.506 [2024-05-15 01:09:36.647341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.506 [2024-05-15 01:09:36.647527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.506 [2024-05-15 01:09:36.647553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 qpair failed and we were unable to recover it.
00:22:24.506 [2024-05-15 01:09:36.647716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.506 [2024-05-15 01:09:36.647919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.506 [2024-05-15 01:09:36.647953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 qpair failed and we were unable to recover it.
00:22:24.506 [2024-05-15 01:09:36.648148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.506 [2024-05-15 01:09:36.648318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.506 [2024-05-15 01:09:36.648342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 qpair failed and we were unable to recover it.
00:22:24.507 [2024-05-15 01:09:36.648502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.507 [2024-05-15 01:09:36.648685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.507 [2024-05-15 01:09:36.648710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 qpair failed and we were unable to recover it.
00:22:24.507 [2024-05-15 01:09:36.648875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.507 [2024-05-15 01:09:36.649045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.507 [2024-05-15 01:09:36.649070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 qpair failed and we were unable to recover it.
00:22:24.507 [2024-05-15 01:09:36.649241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.649406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.649430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.507 qpair failed and we were unable to recover it. 00:22:24.507 [2024-05-15 01:09:36.649616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.649822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.649852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.507 qpair failed and we were unable to recover it. 00:22:24.507 [2024-05-15 01:09:36.650023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.650182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.650210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.507 qpair failed and we were unable to recover it. 00:22:24.507 [2024-05-15 01:09:36.650388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.650554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.650579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.507 qpair failed and we were unable to recover it. 00:22:24.507 [2024-05-15 01:09:36.650740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.650908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.650940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.507 qpair failed and we were unable to recover it. 00:22:24.507 [2024-05-15 01:09:36.651112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.651279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.651312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.507 qpair failed and we were unable to recover it. 00:22:24.507 [2024-05-15 01:09:36.651508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.651664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.651689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.507 qpair failed and we were unable to recover it. 
00:22:24.507 [2024-05-15 01:09:36.651874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.652053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.652079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.507 qpair failed and we were unable to recover it. 00:22:24.507 [2024-05-15 01:09:36.652262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.652433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.652458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.507 qpair failed and we were unable to recover it. 00:22:24.507 [2024-05-15 01:09:36.652626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.652969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.652996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.507 qpair failed and we were unable to recover it. 00:22:24.507 [2024-05-15 01:09:36.653172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.653354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.653379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.507 qpair failed and we were unable to recover it. 00:22:24.507 [2024-05-15 01:09:36.653544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.653711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.653736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.507 qpair failed and we were unable to recover it. 00:22:24.507 [2024-05-15 01:09:36.653958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.654146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.654172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.507 qpair failed and we were unable to recover it. 00:22:24.507 [2024-05-15 01:09:36.654387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.654588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.654614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.507 qpair failed and we were unable to recover it. 
00:22:24.507 [2024-05-15 01:09:36.654861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.655028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.655056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.507 qpair failed and we were unable to recover it. 00:22:24.507 [2024-05-15 01:09:36.655233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.655451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.655476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.507 qpair failed and we were unable to recover it. 00:22:24.507 [2024-05-15 01:09:36.655649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.655812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.655838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.507 qpair failed and we were unable to recover it. 00:22:24.507 [2024-05-15 01:09:36.656069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.656246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.656271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.507 qpair failed and we were unable to recover it. 00:22:24.507 [2024-05-15 01:09:36.656472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.656660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.656686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.507 qpair failed and we were unable to recover it. 00:22:24.507 [2024-05-15 01:09:36.656845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.657028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.657054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.507 qpair failed and we were unable to recover it. 00:22:24.507 [2024-05-15 01:09:36.657250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.657436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.657463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.507 qpair failed and we were unable to recover it. 
00:22:24.507 [2024-05-15 01:09:36.657640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.657805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.657834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.507 qpair failed and we were unable to recover it. 00:22:24.507 [2024-05-15 01:09:36.658047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.658252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.658277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.507 qpair failed and we were unable to recover it. 00:22:24.507 [2024-05-15 01:09:36.658465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.658635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.658659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.507 qpair failed and we were unable to recover it. 00:22:24.507 [2024-05-15 01:09:36.658855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.659024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.659050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.507 qpair failed and we were unable to recover it. 00:22:24.507 [2024-05-15 01:09:36.659224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.659434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.659458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.507 qpair failed and we were unable to recover it. 00:22:24.507 [2024-05-15 01:09:36.659661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.659852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.659877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.507 qpair failed and we were unable to recover it. 00:22:24.507 [2024-05-15 01:09:36.660055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.660223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.507 [2024-05-15 01:09:36.660249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.508 qpair failed and we were unable to recover it. 
00:22:24.508 [2024-05-15 01:09:36.660416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.660602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.660626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.508 qpair failed and we were unable to recover it. 00:22:24.508 [2024-05-15 01:09:36.660793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.661005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.661032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.508 qpair failed and we were unable to recover it. 00:22:24.508 [2024-05-15 01:09:36.661199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.661486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.661511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.508 qpair failed and we were unable to recover it. 00:22:24.508 [2024-05-15 01:09:36.661704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.661872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.661897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.508 qpair failed and we were unable to recover it. 00:22:24.508 [2024-05-15 01:09:36.662090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.662248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.662273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.508 qpair failed and we were unable to recover it. 00:22:24.508 [2024-05-15 01:09:36.662448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.662603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.662627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.508 qpair failed and we were unable to recover it. 00:22:24.508 [2024-05-15 01:09:36.662793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.662996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.663022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.508 qpair failed and we were unable to recover it. 
00:22:24.508 [2024-05-15 01:09:36.663208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.663391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.663418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.508 qpair failed and we were unable to recover it. 00:22:24.508 [2024-05-15 01:09:36.663577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.663794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.663820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.508 qpair failed and we were unable to recover it. 00:22:24.508 [2024-05-15 01:09:36.664008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.664171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.664196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.508 qpair failed and we were unable to recover it. 00:22:24.508 [2024-05-15 01:09:36.664372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.664531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.664558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.508 qpair failed and we were unable to recover it. 00:22:24.508 [2024-05-15 01:09:36.664749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.664917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.664951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.508 qpair failed and we were unable to recover it. 00:22:24.508 [2024-05-15 01:09:36.665124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.665297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.665322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.508 qpair failed and we were unable to recover it. 00:22:24.508 [2024-05-15 01:09:36.665490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.665650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.665677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.508 qpair failed and we were unable to recover it. 
00:22:24.508 [2024-05-15 01:09:36.665855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.666065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.666091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.508 qpair failed and we were unable to recover it. 00:22:24.508 [2024-05-15 01:09:36.666259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.666424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.666450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.508 qpair failed and we were unable to recover it. 00:22:24.508 [2024-05-15 01:09:36.666609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.666800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.666826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.508 qpair failed and we were unable to recover it. 00:22:24.508 [2024-05-15 01:09:36.666992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.667155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.667180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.508 qpair failed and we were unable to recover it. 00:22:24.508 [2024-05-15 01:09:36.667355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.667524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.667549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.508 qpair failed and we were unable to recover it. 00:22:24.508 [2024-05-15 01:09:36.667708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.667865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.667889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.508 qpair failed and we were unable to recover it. 00:22:24.508 [2024-05-15 01:09:36.668211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.668403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.668428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.508 qpair failed and we were unable to recover it. 
00:22:24.508 [2024-05-15 01:09:36.668611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.668820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.668845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.508 qpair failed and we were unable to recover it. 00:22:24.508 [2024-05-15 01:09:36.669066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.669237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.669266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.508 qpair failed and we were unable to recover it. 00:22:24.508 [2024-05-15 01:09:36.669459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.669617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.669642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.508 qpair failed and we were unable to recover it. 00:22:24.508 [2024-05-15 01:09:36.669817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.669986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.670012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.508 qpair failed and we were unable to recover it. 00:22:24.508 [2024-05-15 01:09:36.670223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.670382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.670406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.508 qpair failed and we were unable to recover it. 00:22:24.508 [2024-05-15 01:09:36.670581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.670778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.670805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.508 qpair failed and we were unable to recover it. 00:22:24.508 [2024-05-15 01:09:36.670974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.671146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.671172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.508 qpair failed and we were unable to recover it. 
00:22:24.508 [2024-05-15 01:09:36.671370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.671582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.671608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.508 qpair failed and we were unable to recover it. 00:22:24.508 [2024-05-15 01:09:36.671803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.671989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.508 [2024-05-15 01:09:36.672016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.509 qpair failed and we were unable to recover it. 00:22:24.509 [2024-05-15 01:09:36.672179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.672357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.672382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.509 qpair failed and we were unable to recover it. 00:22:24.509 [2024-05-15 01:09:36.672570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.672732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.672757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.509 qpair failed and we were unable to recover it. 00:22:24.509 [2024-05-15 01:09:36.672949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.673143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.673168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.509 qpair failed and we were unable to recover it. 00:22:24.509 [2024-05-15 01:09:36.673333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.673496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.673525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.509 qpair failed and we were unable to recover it. 00:22:24.509 [2024-05-15 01:09:36.673685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.673909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.673945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.509 qpair failed and we were unable to recover it. 
00:22:24.509 [2024-05-15 01:09:36.674129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.674326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.674352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.509 qpair failed and we were unable to recover it. 00:22:24.509 [2024-05-15 01:09:36.674524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.674683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.674708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.509 qpair failed and we were unable to recover it. 00:22:24.509 [2024-05-15 01:09:36.674923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.675124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.675149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.509 qpair failed and we were unable to recover it. 00:22:24.509 [2024-05-15 01:09:36.675313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.675504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.675529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.509 qpair failed and we were unable to recover it. 00:22:24.509 [2024-05-15 01:09:36.675751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.675923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.675955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.509 qpair failed and we were unable to recover it. 00:22:24.509 [2024-05-15 01:09:36.676132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.676285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.676311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.509 qpair failed and we were unable to recover it. 00:22:24.509 [2024-05-15 01:09:36.676482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.676651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.676676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.509 qpair failed and we were unable to recover it. 
00:22:24.509 [2024-05-15 01:09:36.676870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.677077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.677103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.509 qpair failed and we were unable to recover it. 00:22:24.509 [2024-05-15 01:09:36.677263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.677443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.677468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.509 qpair failed and we were unable to recover it. 00:22:24.509 [2024-05-15 01:09:36.677632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.677891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.677923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.509 qpair failed and we were unable to recover it. 00:22:24.509 [2024-05-15 01:09:36.678111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.678278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.678302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.509 qpair failed and we were unable to recover it. 00:22:24.509 [2024-05-15 01:09:36.678484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.678667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.678692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.509 qpair failed and we were unable to recover it. 00:22:24.509 [2024-05-15 01:09:36.678850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.679017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.679044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.509 qpair failed and we were unable to recover it. 00:22:24.509 [2024-05-15 01:09:36.679237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.679403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.679428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.509 qpair failed and we were unable to recover it. 
00:22:24.509 [2024-05-15 01:09:36.679621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.679837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.679862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.509 qpair failed and we were unable to recover it. 00:22:24.509 [2024-05-15 01:09:36.680043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.680210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.680235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.509 qpair failed and we were unable to recover it. 00:22:24.509 [2024-05-15 01:09:36.680459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.680627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.680653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.509 qpair failed and we were unable to recover it. 00:22:24.509 [2024-05-15 01:09:36.680811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.680976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.681002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.509 qpair failed and we were unable to recover it. 00:22:24.509 [2024-05-15 01:09:36.681171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.681498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.681522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.509 qpair failed and we were unable to recover it. 00:22:24.509 [2024-05-15 01:09:36.681708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.681861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.681886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.509 qpair failed and we were unable to recover it. 00:22:24.509 [2024-05-15 01:09:36.682068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.682227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.682251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.509 qpair failed and we were unable to recover it. 
00:22:24.509 [2024-05-15 01:09:36.682479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.682664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.682689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.509 qpair failed and we were unable to recover it. 00:22:24.509 [2024-05-15 01:09:36.682851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.683017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.683044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.509 qpair failed and we were unable to recover it. 00:22:24.509 [2024-05-15 01:09:36.683355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.683546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.683572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.509 qpair failed and we were unable to recover it. 00:22:24.509 [2024-05-15 01:09:36.683726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.683908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.509 [2024-05-15 01:09:36.683940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.510 qpair failed and we were unable to recover it. 00:22:24.510 [2024-05-15 01:09:36.684145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.684324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.684352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.510 qpair failed and we were unable to recover it. 00:22:24.510 [2024-05-15 01:09:36.684530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.684720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.684745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.510 qpair failed and we were unable to recover it. 00:22:24.510 [2024-05-15 01:09:36.685010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.685205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.685232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.510 qpair failed and we were unable to recover it. 
00:22:24.510 [2024-05-15 01:09:36.685394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.685585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.685609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.510 qpair failed and we were unable to recover it. 00:22:24.510 [2024-05-15 01:09:36.685801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.686000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.686026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.510 qpair failed and we were unable to recover it. 00:22:24.510 [2024-05-15 01:09:36.686199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.686364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.686391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.510 qpair failed and we were unable to recover it. 00:22:24.510 [2024-05-15 01:09:36.686559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.686754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.686781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.510 qpair failed and we were unable to recover it. 00:22:24.510 [2024-05-15 01:09:36.686944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.687133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.687157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.510 qpair failed and we were unable to recover it. 00:22:24.510 [2024-05-15 01:09:36.687374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.687560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.687585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.510 qpair failed and we were unable to recover it. 00:22:24.510 [2024-05-15 01:09:36.687747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.687916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.687947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.510 qpair failed and we were unable to recover it. 
00:22:24.510 [2024-05-15 01:09:36.688165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.688373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.688400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.510 qpair failed and we were unable to recover it. 00:22:24.510 [2024-05-15 01:09:36.688592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.688774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.688798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.510 qpair failed and we were unable to recover it. 00:22:24.510 [2024-05-15 01:09:36.689001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.689175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.689200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.510 qpair failed and we were unable to recover it. 00:22:24.510 [2024-05-15 01:09:36.689381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.689546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.689570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.510 qpair failed and we were unable to recover it. 00:22:24.510 [2024-05-15 01:09:36.689754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.689906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.689937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.510 qpair failed and we were unable to recover it. 00:22:24.510 [2024-05-15 01:09:36.690111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.690281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.690309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.510 qpair failed and we were unable to recover it. 00:22:24.510 [2024-05-15 01:09:36.690464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.690632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.690660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.510 qpair failed and we were unable to recover it. 
00:22:24.510 [2024-05-15 01:09:36.690810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.690994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.691019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.510 qpair failed and we were unable to recover it. 00:22:24.510 [2024-05-15 01:09:36.691173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.691350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.691376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.510 qpair failed and we were unable to recover it. 00:22:24.510 [2024-05-15 01:09:36.691567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.691723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.691748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.510 qpair failed and we were unable to recover it. 00:22:24.510 [2024-05-15 01:09:36.691910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.692084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.692109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.510 qpair failed and we were unable to recover it. 00:22:24.510 [2024-05-15 01:09:36.692275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.692426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.692451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.510 qpair failed and we were unable to recover it. 00:22:24.510 [2024-05-15 01:09:36.692762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.692973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.693000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.510 qpair failed and we were unable to recover it. 00:22:24.510 [2024-05-15 01:09:36.693166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.693353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.693378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.510 qpair failed and we were unable to recover it. 
00:22:24.510 [2024-05-15 01:09:36.693543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.693716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.510 [2024-05-15 01:09:36.693741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.511 qpair failed and we were unable to recover it. 00:22:24.511 [2024-05-15 01:09:36.693935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.694147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.694172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.511 qpair failed and we were unable to recover it. 00:22:24.511 [2024-05-15 01:09:36.694351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.694543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.694568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.511 qpair failed and we were unable to recover it. 00:22:24.511 [2024-05-15 01:09:36.694746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.694905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.694935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.511 qpair failed and we were unable to recover it. 00:22:24.511 [2024-05-15 01:09:36.695120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.695279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.695304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.511 qpair failed and we were unable to recover it. 00:22:24.511 [2024-05-15 01:09:36.695499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.695686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.695713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.511 qpair failed and we were unable to recover it. 00:22:24.511 [2024-05-15 01:09:36.695876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.696082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.696107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.511 qpair failed and we were unable to recover it. 
00:22:24.511 [2024-05-15 01:09:36.696267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.696431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.696456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.511 qpair failed and we were unable to recover it. 00:22:24.511 [2024-05-15 01:09:36.696649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.696909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.696941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.511 qpair failed and we were unable to recover it. 00:22:24.511 [2024-05-15 01:09:36.697109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.697298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.697324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.511 qpair failed and we were unable to recover it. 00:22:24.511 [2024-05-15 01:09:36.697480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.697664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.697688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.511 qpair failed and we were unable to recover it. 00:22:24.511 [2024-05-15 01:09:36.697872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.698078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.698109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.511 qpair failed and we were unable to recover it. 00:22:24.511 [2024-05-15 01:09:36.698313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.698474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.698500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.511 qpair failed and we were unable to recover it. 00:22:24.511 [2024-05-15 01:09:36.698665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.698828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.698853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.511 qpair failed and we were unable to recover it. 
00:22:24.511 [2024-05-15 01:09:36.699049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.699241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.699266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.511 qpair failed and we were unable to recover it. 00:22:24.511 [2024-05-15 01:09:36.699433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.699618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.699642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.511 qpair failed and we were unable to recover it. 00:22:24.511 [2024-05-15 01:09:36.699856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.700041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.700069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.511 qpair failed and we were unable to recover it. 00:22:24.511 [2024-05-15 01:09:36.700227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.700389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.700413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.511 qpair failed and we were unable to recover it. 00:22:24.511 [2024-05-15 01:09:36.700606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.700772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.700799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.511 qpair failed and we were unable to recover it. 00:22:24.511 [2024-05-15 01:09:36.700967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.701265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.701291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.511 qpair failed and we were unable to recover it. 00:22:24.511 [2024-05-15 01:09:36.701488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.701642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.701666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.511 qpair failed and we were unable to recover it. 
00:22:24.511 [2024-05-15 01:09:36.701856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.702023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.702053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.511 qpair failed and we were unable to recover it. 00:22:24.511 [2024-05-15 01:09:36.702210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.702425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.702449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.511 qpair failed and we were unable to recover it. 00:22:24.511 [2024-05-15 01:09:36.702628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.702797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.702822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.511 qpair failed and we were unable to recover it. 00:22:24.511 [2024-05-15 01:09:36.702992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.703179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.703203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.511 qpair failed and we were unable to recover it. 00:22:24.511 [2024-05-15 01:09:36.703388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.703556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.703580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.511 qpair failed and we were unable to recover it. 00:22:24.511 [2024-05-15 01:09:36.703780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.703966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.703991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.511 qpair failed and we were unable to recover it. 00:22:24.511 [2024-05-15 01:09:36.704152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.704314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.704339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.511 qpair failed and we were unable to recover it. 
00:22:24.511 [2024-05-15 01:09:36.704502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.704672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.704697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.511 qpair failed and we were unable to recover it. 00:22:24.511 [2024-05-15 01:09:36.704862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.705038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.705063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.511 qpair failed and we were unable to recover it. 00:22:24.511 [2024-05-15 01:09:36.705232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.705421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.511 [2024-05-15 01:09:36.705446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.512 qpair failed and we were unable to recover it. 00:22:24.512 [2024-05-15 01:09:36.705636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.705825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.705851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.512 qpair failed and we were unable to recover it. 00:22:24.512 [2024-05-15 01:09:36.706049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.706205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.706230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.512 qpair failed and we were unable to recover it. 00:22:24.512 [2024-05-15 01:09:36.706395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.706582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.706607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.512 qpair failed and we were unable to recover it. 00:22:24.512 [2024-05-15 01:09:36.706765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.706921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.706952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.512 qpair failed and we were unable to recover it. 
00:22:24.512 [2024-05-15 01:09:36.707115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.707269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.707296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.512 qpair failed and we were unable to recover it. 00:22:24.512 [2024-05-15 01:09:36.707481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.707645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.707669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.512 qpair failed and we were unable to recover it. 00:22:24.512 [2024-05-15 01:09:36.707831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.707997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.708023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.512 qpair failed and we were unable to recover it. 00:22:24.512 [2024-05-15 01:09:36.708184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.708371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.708396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.512 qpair failed and we were unable to recover it. 00:22:24.512 [2024-05-15 01:09:36.708557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.708710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.708735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.512 qpair failed and we were unable to recover it. 00:22:24.512 [2024-05-15 01:09:36.708897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.709073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.709099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.512 qpair failed and we were unable to recover it. 00:22:24.512 [2024-05-15 01:09:36.709408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.709593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.709618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.512 qpair failed and we were unable to recover it. 
00:22:24.512 [2024-05-15 01:09:36.709774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.709955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.709983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.512 qpair failed and we were unable to recover it. 00:22:24.512 [2024-05-15 01:09:36.710152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.710313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.710339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.512 qpair failed and we were unable to recover it. 00:22:24.512 [2024-05-15 01:09:36.710509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.710690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.710715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.512 qpair failed and we were unable to recover it. 00:22:24.512 [2024-05-15 01:09:36.710873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.711066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.711092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.512 qpair failed and we were unable to recover it. 00:22:24.512 [2024-05-15 01:09:36.711263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.711424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.711451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.512 qpair failed and we were unable to recover it. 00:22:24.512 [2024-05-15 01:09:36.711624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.711796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.711822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.512 qpair failed and we were unable to recover it. 00:22:24.512 [2024-05-15 01:09:36.711987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.712147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.712172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.512 qpair failed and we were unable to recover it. 
00:22:24.512 [2024-05-15 01:09:36.712366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.712567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.712592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.512 qpair failed and we were unable to recover it. 00:22:24.512 [2024-05-15 01:09:36.712781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.712961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.712986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.512 qpair failed and we were unable to recover it. 00:22:24.512 [2024-05-15 01:09:36.713169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.713331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.713355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.512 qpair failed and we were unable to recover it. 00:22:24.512 [2024-05-15 01:09:36.713567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.713734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.713761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.512 qpair failed and we were unable to recover it. 00:22:24.512 [2024-05-15 01:09:36.713956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.714129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.714153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.512 qpair failed and we were unable to recover it. 00:22:24.512 [2024-05-15 01:09:36.714321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.714503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.714528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.512 qpair failed and we were unable to recover it. 00:22:24.512 [2024-05-15 01:09:36.714721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.714872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.714896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.512 qpair failed and we were unable to recover it. 
00:22:24.512 [2024-05-15 01:09:36.715237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.715453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.715478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.512 qpair failed and we were unable to recover it. 00:22:24.512 [2024-05-15 01:09:36.715665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.715842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.715866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.512 qpair failed and we were unable to recover it. 00:22:24.512 [2024-05-15 01:09:36.716232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.716435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.716464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.512 qpair failed and we were unable to recover it. 00:22:24.512 [2024-05-15 01:09:36.716666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.716835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.716859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.512 qpair failed and we were unable to recover it. 00:22:24.512 [2024-05-15 01:09:36.717037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.717200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.512 [2024-05-15 01:09:36.717226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.513 qpair failed and we were unable to recover it. 00:22:24.513 [2024-05-15 01:09:36.717383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.717566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.717591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.513 qpair failed and we were unable to recover it. 00:22:24.513 [2024-05-15 01:09:36.717754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.717959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.717985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.513 qpair failed and we were unable to recover it. 
00:22:24.513 [2024-05-15 01:09:36.718142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.718302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.718328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.513 qpair failed and we were unable to recover it. 00:22:24.513 [2024-05-15 01:09:36.718506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.718670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.718695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.513 qpair failed and we were unable to recover it. 00:22:24.513 [2024-05-15 01:09:36.718904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.719068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.719093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.513 qpair failed and we were unable to recover it. 00:22:24.513 [2024-05-15 01:09:36.719256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.719412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.719437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.513 qpair failed and we were unable to recover it. 00:22:24.513 [2024-05-15 01:09:36.719603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.719791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.719818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.513 qpair failed and we were unable to recover it. 00:22:24.513 [2024-05-15 01:09:36.720004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.720163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.720188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.513 qpair failed and we were unable to recover it. 00:22:24.513 [2024-05-15 01:09:36.720343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.720508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.720535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.513 qpair failed and we were unable to recover it. 
00:22:24.513 [2024-05-15 01:09:36.720699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.720856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.720882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.513 qpair failed and we were unable to recover it. 00:22:24.513 [2024-05-15 01:09:36.721067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.721222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.721246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.513 qpair failed and we were unable to recover it. 00:22:24.513 [2024-05-15 01:09:36.721401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.721550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.721579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.513 qpair failed and we were unable to recover it. 00:22:24.513 [2024-05-15 01:09:36.721740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.721924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.721957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.513 qpair failed and we were unable to recover it. 00:22:24.513 [2024-05-15 01:09:36.722145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.722300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.722326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.513 qpair failed and we were unable to recover it. 00:22:24.513 [2024-05-15 01:09:36.722519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.722697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.722721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.513 qpair failed and we were unable to recover it. 00:22:24.513 [2024-05-15 01:09:36.722881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.723043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.723069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.513 qpair failed and we were unable to recover it. 
00:22:24.513 [2024-05-15 01:09:36.723256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.723479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.723504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.513 qpair failed and we were unable to recover it. 00:22:24.513 [2024-05-15 01:09:36.723657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.723845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.723870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.513 qpair failed and we were unable to recover it. 00:22:24.513 [2024-05-15 01:09:36.724030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.724187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.724211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.513 qpair failed and we were unable to recover it. 00:22:24.513 [2024-05-15 01:09:36.724381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.724535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.724560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.513 qpair failed and we were unable to recover it. 00:22:24.513 [2024-05-15 01:09:36.724719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.724909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.724941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.513 qpair failed and we were unable to recover it. 00:22:24.513 [2024-05-15 01:09:36.725099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.725268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.725293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.513 qpair failed and we were unable to recover it. 00:22:24.513 [2024-05-15 01:09:36.725464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.725634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.725658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.513 qpair failed and we were unable to recover it. 
00:22:24.513 [2024-05-15 01:09:36.725836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.726030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.726056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.513 qpair failed and we were unable to recover it. 00:22:24.513 [2024-05-15 01:09:36.726251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.726411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.726436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.513 qpair failed and we were unable to recover it. 00:22:24.513 [2024-05-15 01:09:36.726605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.726757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.726782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.513 qpair failed and we were unable to recover it. 00:22:24.513 [2024-05-15 01:09:36.726971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.727132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.727157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.513 qpair failed and we were unable to recover it. 00:22:24.513 [2024-05-15 01:09:36.727346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.727539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.727563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.513 qpair failed and we were unable to recover it. 00:22:24.513 [2024-05-15 01:09:36.727716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.727897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.727921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.513 qpair failed and we were unable to recover it. 00:22:24.513 [2024-05-15 01:09:36.728096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.728248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.513 [2024-05-15 01:09:36.728272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.514 qpair failed and we were unable to recover it. 
00:22:24.514 [2024-05-15 01:09:36.728450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.728619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.728643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.514 qpair failed and we were unable to recover it. 00:22:24.514 [2024-05-15 01:09:36.728824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.728981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.729007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.514 qpair failed and we were unable to recover it. 00:22:24.514 [2024-05-15 01:09:36.729171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.729485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.729525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.514 qpair failed and we were unable to recover it. 00:22:24.514 [2024-05-15 01:09:36.729727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.729882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.729907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.514 qpair failed and we were unable to recover it. 00:22:24.514 [2024-05-15 01:09:36.730087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.730289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.730313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.514 qpair failed and we were unable to recover it. 00:22:24.514 [2024-05-15 01:09:36.730477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.730631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.730656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.514 qpair failed and we were unable to recover it. 00:22:24.514 [2024-05-15 01:09:36.730839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.730999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.731025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.514 qpair failed and we were unable to recover it. 
00:22:24.514 [2024-05-15 01:09:36.731189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.731375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.731400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.514 qpair failed and we were unable to recover it. 00:22:24.514 [2024-05-15 01:09:36.731554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.731709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.731734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.514 qpair failed and we were unable to recover it. 00:22:24.514 [2024-05-15 01:09:36.731900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.732075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.732100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.514 qpair failed and we were unable to recover it. 00:22:24.514 [2024-05-15 01:09:36.732286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.732452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.732476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.514 qpair failed and we were unable to recover it. 00:22:24.514 [2024-05-15 01:09:36.732642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.732801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.732827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.514 qpair failed and we were unable to recover it. 00:22:24.514 [2024-05-15 01:09:36.733019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.733184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.733209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.514 qpair failed and we were unable to recover it. 00:22:24.514 [2024-05-15 01:09:36.733378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.733539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.733565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.514 qpair failed and we were unable to recover it. 
00:22:24.514 [2024-05-15 01:09:36.733729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.733918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.733950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.514 qpair failed and we were unable to recover it. 00:22:24.514 [2024-05-15 01:09:36.734117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.734287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.734311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.514 qpair failed and we were unable to recover it. 00:22:24.514 [2024-05-15 01:09:36.734471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.734627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.734651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.514 qpair failed and we were unable to recover it. 00:22:24.514 [2024-05-15 01:09:36.734867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.735028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.735055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.514 qpair failed and we were unable to recover it. 00:22:24.514 [2024-05-15 01:09:36.735219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.735400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.735424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.514 qpair failed and we were unable to recover it. 00:22:24.514 [2024-05-15 01:09:36.735637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.735793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.735817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.514 qpair failed and we were unable to recover it. 00:22:24.514 [2024-05-15 01:09:36.736000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.736154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.736178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.514 qpair failed and we were unable to recover it. 
00:22:24.514 [2024-05-15 01:09:36.736336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.736497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.736522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.514 qpair failed and we were unable to recover it. 00:22:24.514 [2024-05-15 01:09:36.736711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.736896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.736921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.514 qpair failed and we were unable to recover it. 00:22:24.514 [2024-05-15 01:09:36.737116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.737270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.737295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.514 qpair failed and we were unable to recover it. 00:22:24.514 [2024-05-15 01:09:36.737460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.737643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.514 [2024-05-15 01:09:36.737667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.514 qpair failed and we were unable to recover it. 00:22:24.515 [2024-05-15 01:09:36.737850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.738036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.738061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.515 qpair failed and we were unable to recover it. 00:22:24.515 [2024-05-15 01:09:36.738244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.738426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.738450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.515 qpair failed and we were unable to recover it. 00:22:24.515 [2024-05-15 01:09:36.738646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.738834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.738858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.515 qpair failed and we were unable to recover it. 
00:22:24.515 [2024-05-15 01:09:36.739012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.739204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.739229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.515 qpair failed and we were unable to recover it. 00:22:24.515 [2024-05-15 01:09:36.739384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.739573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.739597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.515 qpair failed and we were unable to recover it. 00:22:24.515 [2024-05-15 01:09:36.739779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.740000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.740025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.515 qpair failed and we were unable to recover it. 00:22:24.515 [2024-05-15 01:09:36.740182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.740352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.740376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.515 qpair failed and we were unable to recover it. 00:22:24.515 [2024-05-15 01:09:36.740538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.740696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.740725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.515 qpair failed and we were unable to recover it. 00:22:24.515 [2024-05-15 01:09:36.740882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.741072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.741097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.515 qpair failed and we were unable to recover it. 00:22:24.515 [2024-05-15 01:09:36.741275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.741442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.741466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.515 qpair failed and we were unable to recover it. 
00:22:24.515 [2024-05-15 01:09:36.741624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.741811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.741836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.515 qpair failed and we were unable to recover it. 00:22:24.515 [2024-05-15 01:09:36.741994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.742192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.742216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.515 qpair failed and we were unable to recover it. 00:22:24.515 [2024-05-15 01:09:36.742440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.742655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.742680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.515 qpair failed and we were unable to recover it. 00:22:24.515 [2024-05-15 01:09:36.742841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.743024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.743050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.515 qpair failed and we were unable to recover it. 00:22:24.515 [2024-05-15 01:09:36.743231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.743384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.743409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.515 qpair failed and we were unable to recover it. 00:22:24.515 [2024-05-15 01:09:36.743573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.743736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.743763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.515 qpair failed and we were unable to recover it. 00:22:24.515 [2024-05-15 01:09:36.743972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.744139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.744164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.515 qpair failed and we were unable to recover it. 
00:22:24.515 [2024-05-15 01:09:36.744329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.744522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.744546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.515 qpair failed and we were unable to recover it. 00:22:24.515 [2024-05-15 01:09:36.744740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.744896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.744920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.515 qpair failed and we were unable to recover it. 00:22:24.515 [2024-05-15 01:09:36.745086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.745249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.745273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.515 qpair failed and we were unable to recover it. 00:22:24.515 [2024-05-15 01:09:36.745458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.745618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.745643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.515 qpair failed and we were unable to recover it. 00:22:24.515 [2024-05-15 01:09:36.745800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.745959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.745984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.515 qpair failed and we were unable to recover it. 00:22:24.515 [2024-05-15 01:09:36.746142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.746340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.746364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.515 qpair failed and we were unable to recover it. 00:22:24.515 [2024-05-15 01:09:36.746547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.746702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.746726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.515 qpair failed and we were unable to recover it. 
00:22:24.515 [2024-05-15 01:09:36.746915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.747082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.747108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.515 qpair failed and we were unable to recover it. 00:22:24.515 [2024-05-15 01:09:36.747316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.747528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.747553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.515 qpair failed and we were unable to recover it. 00:22:24.515 [2024-05-15 01:09:36.747712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.747904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.747928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.515 qpair failed and we were unable to recover it. 00:22:24.515 [2024-05-15 01:09:36.748094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.748269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.748295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.515 qpair failed and we were unable to recover it. 00:22:24.515 [2024-05-15 01:09:36.748460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.748611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.748636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.515 qpair failed and we were unable to recover it. 00:22:24.515 [2024-05-15 01:09:36.748810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.749004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.515 [2024-05-15 01:09:36.749030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.516 qpair failed and we were unable to recover it. 00:22:24.516 [2024-05-15 01:09:36.749188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.749382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.749409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.516 qpair failed and we were unable to recover it. 
00:22:24.516 [2024-05-15 01:09:36.749578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.749759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.749783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.516 qpair failed and we were unable to recover it. 00:22:24.516 [2024-05-15 01:09:36.749948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.750144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.750169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.516 qpair failed and we were unable to recover it. 00:22:24.516 [2024-05-15 01:09:36.750346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.750565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.750589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.516 qpair failed and we were unable to recover it. 00:22:24.516 [2024-05-15 01:09:36.750745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.750936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.750961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.516 qpair failed and we were unable to recover it. 00:22:24.516 [2024-05-15 01:09:36.751126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.751281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.751305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.516 qpair failed and we were unable to recover it. 00:22:24.516 [2024-05-15 01:09:36.751458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.751634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.751659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.516 qpair failed and we were unable to recover it. 00:22:24.516 [2024-05-15 01:09:36.751831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.752018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.752043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.516 qpair failed and we were unable to recover it. 
00:22:24.516 [2024-05-15 01:09:36.752261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.752462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.752489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.516 qpair failed and we were unable to recover it. 00:22:24.516 [2024-05-15 01:09:36.752679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.752860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.752885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.516 qpair failed and we were unable to recover it. 00:22:24.516 [2024-05-15 01:09:36.753087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.753250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.753275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.516 qpair failed and we were unable to recover it. 00:22:24.516 [2024-05-15 01:09:36.753468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.753657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.753682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.516 qpair failed and we were unable to recover it. 00:22:24.516 [2024-05-15 01:09:36.753840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.754007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.754032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.516 qpair failed and we were unable to recover it. 00:22:24.516 [2024-05-15 01:09:36.754197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.754393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.754419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.516 qpair failed and we were unable to recover it. 00:22:24.516 [2024-05-15 01:09:36.754602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.754751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.754776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.516 qpair failed and we were unable to recover it. 
00:22:24.516 [2024-05-15 01:09:36.754961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.755161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.755186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.516 qpair failed and we were unable to recover it. 00:22:24.516 [2024-05-15 01:09:36.755369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.755555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.755580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.516 qpair failed and we were unable to recover it. 00:22:24.516 [2024-05-15 01:09:36.755746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.755958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.755984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.516 qpair failed and we were unable to recover it. 00:22:24.516 [2024-05-15 01:09:36.756181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.756368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.756392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.516 qpair failed and we were unable to recover it. 00:22:24.516 [2024-05-15 01:09:36.756598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.756814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.756839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.516 qpair failed and we were unable to recover it. 00:22:24.516 [2024-05-15 01:09:36.757004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.757188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.757213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.516 qpair failed and we were unable to recover it. 00:22:24.516 [2024-05-15 01:09:36.757371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.757555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.757581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.516 qpair failed and we were unable to recover it. 
00:22:24.516 [2024-05-15 01:09:36.757784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.757976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.516 [2024-05-15 01:09:36.758002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.516 qpair failed and we were unable to recover it. 00:22:24.516 [2024-05-15 01:09:36.758167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.758328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.758354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.517 qpair failed and we were unable to recover it. 00:22:24.517 [2024-05-15 01:09:36.758578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.758729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.758754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.517 qpair failed and we were unable to recover it. 00:22:24.517 [2024-05-15 01:09:36.758941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.759148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.759174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.517 qpair failed and we were unable to recover it. 00:22:24.517 [2024-05-15 01:09:36.759357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.759544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.759570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.517 qpair failed and we were unable to recover it. 00:22:24.517 [2024-05-15 01:09:36.759758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.759915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.759947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.517 qpair failed and we were unable to recover it. 00:22:24.517 [2024-05-15 01:09:36.760138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.760302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.760328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.517 qpair failed and we were unable to recover it. 
00:22:24.517 [2024-05-15 01:09:36.760493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.760681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.760706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.517 qpair failed and we were unable to recover it. 00:22:24.517 [2024-05-15 01:09:36.760868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.761037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.761064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.517 qpair failed and we were unable to recover it. 00:22:24.517 [2024-05-15 01:09:36.761235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.761425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.761451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.517 qpair failed and we were unable to recover it. 00:22:24.517 [2024-05-15 01:09:36.761646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.761861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.761886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.517 qpair failed and we were unable to recover it. 00:22:24.517 [2024-05-15 01:09:36.762051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.762243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.762269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.517 qpair failed and we were unable to recover it. 00:22:24.517 [2024-05-15 01:09:36.762428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.762583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.762609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.517 qpair failed and we were unable to recover it. 00:22:24.517 [2024-05-15 01:09:36.762767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.762925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.762956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.517 qpair failed and we were unable to recover it. 
00:22:24.517 [2024-05-15 01:09:36.763172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.763361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.763385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.517 qpair failed and we were unable to recover it. 00:22:24.517 [2024-05-15 01:09:36.763573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.763739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.763763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.517 qpair failed and we were unable to recover it. 00:22:24.517 [2024-05-15 01:09:36.763916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.764111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.764136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.517 qpair failed and we were unable to recover it. 00:22:24.517 [2024-05-15 01:09:36.764299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.764491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.764516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.517 qpair failed and we were unable to recover it. 00:22:24.517 [2024-05-15 01:09:36.764671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.764861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.764886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.517 qpair failed and we were unable to recover it. 00:22:24.517 [2024-05-15 01:09:36.765094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.765296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.765321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.517 qpair failed and we were unable to recover it. 00:22:24.517 [2024-05-15 01:09:36.765513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.765669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.765694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.517 qpair failed and we were unable to recover it. 
00:22:24.517 [2024-05-15 01:09:36.765864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.766020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.766046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.517 qpair failed and we were unable to recover it. 00:22:24.517 [2024-05-15 01:09:36.766203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.766393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.517 [2024-05-15 01:09:36.766418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.517 qpair failed and we were unable to recover it. 00:22:24.518 [2024-05-15 01:09:36.766585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.766744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.766768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.518 qpair failed and we were unable to recover it. 00:22:24.518 [2024-05-15 01:09:36.766936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.767122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.767147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.518 qpair failed and we were unable to recover it. 00:22:24.518 [2024-05-15 01:09:36.767311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.767477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.767503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.518 qpair failed and we were unable to recover it. 00:22:24.518 [2024-05-15 01:09:36.767660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.767941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.767967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.518 qpair failed and we were unable to recover it. 00:22:24.518 [2024-05-15 01:09:36.768179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.768363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.768388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.518 qpair failed and we were unable to recover it. 
00:22:24.518 [2024-05-15 01:09:36.768579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.768738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.768763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.518 qpair failed and we were unable to recover it. 00:22:24.518 [2024-05-15 01:09:36.768924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.769088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.769114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.518 qpair failed and we were unable to recover it. 00:22:24.518 [2024-05-15 01:09:36.769304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.769487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.769512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.518 qpair failed and we were unable to recover it. 00:22:24.518 [2024-05-15 01:09:36.769694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.769856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.769881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.518 qpair failed and we were unable to recover it. 00:22:24.518 [2024-05-15 01:09:36.770065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.770229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.770256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.518 qpair failed and we were unable to recover it. 00:22:24.518 [2024-05-15 01:09:36.770424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.770610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.770635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.518 qpair failed and we were unable to recover it. 00:22:24.518 [2024-05-15 01:09:36.770790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.770983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.771009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.518 qpair failed and we were unable to recover it. 
00:22:24.518 [2024-05-15 01:09:36.771163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.771371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.771396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.518 qpair failed and we were unable to recover it. 00:22:24.518 [2024-05-15 01:09:36.771615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.771775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.771806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.518 qpair failed and we were unable to recover it. 00:22:24.518 [2024-05-15 01:09:36.772079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.772266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.772291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.518 qpair failed and we were unable to recover it. 00:22:24.518 [2024-05-15 01:09:36.772451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.772608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.772633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.518 qpair failed and we were unable to recover it. 00:22:24.518 [2024-05-15 01:09:36.772803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.772965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.772991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.518 qpair failed and we were unable to recover it. 00:22:24.518 [2024-05-15 01:09:36.773154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.773341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.773365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.518 qpair failed and we were unable to recover it. 00:22:24.518 [2024-05-15 01:09:36.773548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.773708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.518 [2024-05-15 01:09:36.773733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.518 qpair failed and we were unable to recover it. 
00:22:24.518 [2024-05-15 01:09:36.773918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.774113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.774138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.519 qpair failed and we were unable to recover it. 00:22:24.519 [2024-05-15 01:09:36.774301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.774458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.774483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.519 qpair failed and we were unable to recover it. 00:22:24.519 [2024-05-15 01:09:36.774644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.774798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.774825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.519 qpair failed and we were unable to recover it. 00:22:24.519 [2024-05-15 01:09:36.775011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.775204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.775229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.519 qpair failed and we were unable to recover it. 00:22:24.519 [2024-05-15 01:09:36.775408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.775592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.775621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.519 qpair failed and we were unable to recover it. 00:22:24.519 [2024-05-15 01:09:36.775790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.775943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.775969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.519 qpair failed and we were unable to recover it. 00:22:24.519 [2024-05-15 01:09:36.776150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.776304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.776328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.519 qpair failed and we were unable to recover it. 
00:22:24.519 [2024-05-15 01:09:36.776511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.776702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.776728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.519 qpair failed and we were unable to recover it. 00:22:24.519 [2024-05-15 01:09:36.776913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.777084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.777108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.519 qpair failed and we were unable to recover it. 00:22:24.519 [2024-05-15 01:09:36.777293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.777559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.777584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.519 qpair failed and we were unable to recover it. 00:22:24.519 [2024-05-15 01:09:36.777770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.777926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.777964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.519 qpair failed and we were unable to recover it. 00:22:24.519 [2024-05-15 01:09:36.778163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.778314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.778339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.519 qpair failed and we were unable to recover it. 00:22:24.519 [2024-05-15 01:09:36.778493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.778647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.778672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.519 qpair failed and we were unable to recover it. 00:22:24.519 [2024-05-15 01:09:36.778835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.779017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.779062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.519 qpair failed and we were unable to recover it. 
00:22:24.519 [2024-05-15 01:09:36.779243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.779432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.779462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.519 qpair failed and we were unable to recover it. 00:22:24.519 [2024-05-15 01:09:36.779677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.779835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.779859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.519 qpair failed and we were unable to recover it. 00:22:24.519 [2024-05-15 01:09:36.780053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.780208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.780233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.519 qpair failed and we were unable to recover it. 00:22:24.519 [2024-05-15 01:09:36.780399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.780582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.780607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.519 qpair failed and we were unable to recover it. 00:22:24.519 [2024-05-15 01:09:36.780785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.780944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.780969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.519 qpair failed and we were unable to recover it. 00:22:24.519 [2024-05-15 01:09:36.781163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.781364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.519 [2024-05-15 01:09:36.781388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.519 qpair failed and we were unable to recover it. 00:22:24.520 [2024-05-15 01:09:36.781588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.520 [2024-05-15 01:09:36.781746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.520 [2024-05-15 01:09:36.781771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.520 qpair failed and we were unable to recover it. 
00:22:24.520 [2024-05-15 01:09:36.781959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.520 [2024-05-15 01:09:36.782111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.520 [2024-05-15 01:09:36.782135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.520 qpair failed and we were unable to recover it. 00:22:24.520 [2024-05-15 01:09:36.782326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.520 [2024-05-15 01:09:36.782495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.520 [2024-05-15 01:09:36.782522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.520 qpair failed and we were unable to recover it. 00:22:24.520 [2024-05-15 01:09:36.782695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.520 [2024-05-15 01:09:36.782856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.520 [2024-05-15 01:09:36.782882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.520 qpair failed and we were unable to recover it. 00:22:24.520 [2024-05-15 01:09:36.783038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.520 [2024-05-15 01:09:36.783218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.520 [2024-05-15 01:09:36.783248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.520 qpair failed and we were unable to recover it. 00:22:24.520 [2024-05-15 01:09:36.783430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.520 [2024-05-15 01:09:36.783618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.520 [2024-05-15 01:09:36.783642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.520 qpair failed and we were unable to recover it. 00:22:24.520 [2024-05-15 01:09:36.783796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.520 [2024-05-15 01:09:36.783986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.520 [2024-05-15 01:09:36.784011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.520 qpair failed and we were unable to recover it. 00:22:24.520 [2024-05-15 01:09:36.784163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.520 [2024-05-15 01:09:36.784322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.520 [2024-05-15 01:09:36.784346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.520 qpair failed and we were unable to recover it. 
00:22:24.520 [2024-05-15 01:09:36.784550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.520 [2024-05-15 01:09:36.784704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.520 [2024-05-15 01:09:36.784728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.520 qpair failed and we were unable to recover it. 00:22:24.520 [2024-05-15 01:09:36.784901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.520 [2024-05-15 01:09:36.785087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.520 [2024-05-15 01:09:36.785113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.520 qpair failed and we were unable to recover it. 00:22:24.520 [2024-05-15 01:09:36.785275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.520 [2024-05-15 01:09:36.785461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.520 [2024-05-15 01:09:36.785485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.520 qpair failed and we were unable to recover it. 00:22:24.520 [2024-05-15 01:09:36.785643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.520 [2024-05-15 01:09:36.785827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.520 [2024-05-15 01:09:36.785852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.520 qpair failed and we were unable to recover it. 00:22:24.520 [2024-05-15 01:09:36.786016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.520 [2024-05-15 01:09:36.786177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.786204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.521 qpair failed and we were unable to recover it. 00:22:24.521 [2024-05-15 01:09:36.786364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.786523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.786548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.521 qpair failed and we were unable to recover it. 00:22:24.521 [2024-05-15 01:09:36.786763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.786973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.786999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.521 qpair failed and we were unable to recover it. 
00:22:24.521 [2024-05-15 01:09:36.787188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.787373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.787398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.521 qpair failed and we were unable to recover it. 00:22:24.521 [2024-05-15 01:09:36.787608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.787792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.787817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.521 qpair failed and we were unable to recover it. 00:22:24.521 [2024-05-15 01:09:36.788001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.788150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.788175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.521 qpair failed and we were unable to recover it. 00:22:24.521 [2024-05-15 01:09:36.788330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.788512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.788536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.521 qpair failed and we were unable to recover it. 00:22:24.521 [2024-05-15 01:09:36.788713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.788918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.788948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.521 qpair failed and we were unable to recover it. 00:22:24.521 [2024-05-15 01:09:36.789137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.789291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.789315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.521 qpair failed and we were unable to recover it. 00:22:24.521 [2024-05-15 01:09:36.789473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.789676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.789700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.521 qpair failed and we were unable to recover it. 
00:22:24.521 [2024-05-15 01:09:36.789889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.790076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.790103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.521 qpair failed and we were unable to recover it. 00:22:24.521 [2024-05-15 01:09:36.790289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.790452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.790477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.521 qpair failed and we were unable to recover it. 00:22:24.521 [2024-05-15 01:09:36.790666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.790881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.790905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.521 qpair failed and we were unable to recover it. 00:22:24.521 [2024-05-15 01:09:36.791122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.791326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.791354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.521 qpair failed and we were unable to recover it. 00:22:24.521 [2024-05-15 01:09:36.791525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.791708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.791734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.521 qpair failed and we were unable to recover it. 00:22:24.521 [2024-05-15 01:09:36.791895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.792089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.792116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.521 qpair failed and we were unable to recover it. 00:22:24.521 [2024-05-15 01:09:36.792387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.792567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.792592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.521 qpair failed and we were unable to recover it. 
00:22:24.521 [2024-05-15 01:09:36.792755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.792914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.792947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.521 qpair failed and we were unable to recover it. 00:22:24.521 [2024-05-15 01:09:36.793105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.793292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.793318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.521 qpair failed and we were unable to recover it. 00:22:24.521 [2024-05-15 01:09:36.793474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.793742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.521 [2024-05-15 01:09:36.793767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.521 qpair failed and we were unable to recover it. 00:22:24.522 [2024-05-15 01:09:36.793943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.794099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.794124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.522 qpair failed and we were unable to recover it. 00:22:24.522 [2024-05-15 01:09:36.794281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.794444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.794471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.522 qpair failed and we were unable to recover it. 00:22:24.522 [2024-05-15 01:09:36.794652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.794844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.794869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.522 qpair failed and we were unable to recover it. 00:22:24.522 [2024-05-15 01:09:36.795037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.795226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.795251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.522 qpair failed and we were unable to recover it. 
00:22:24.522 [2024-05-15 01:09:36.795404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.795565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.795591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.522 qpair failed and we were unable to recover it. 00:22:24.522 [2024-05-15 01:09:36.795768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.795954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.795980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.522 qpair failed and we were unable to recover it. 00:22:24.522 [2024-05-15 01:09:36.796250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.796430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.796455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.522 qpair failed and we were unable to recover it. 00:22:24.522 [2024-05-15 01:09:36.796641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.796827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.796851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.522 qpair failed and we were unable to recover it. 00:22:24.522 [2024-05-15 01:09:36.797009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.797175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.797201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.522 qpair failed and we were unable to recover it. 00:22:24.522 [2024-05-15 01:09:36.797360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.797519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.797543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.522 qpair failed and we were unable to recover it. 00:22:24.522 [2024-05-15 01:09:36.797721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.797904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.797939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.522 qpair failed and we were unable to recover it. 
00:22:24.522 [2024-05-15 01:09:36.798099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.798264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.798289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.522 qpair failed and we were unable to recover it. 00:22:24.522 [2024-05-15 01:09:36.798478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.798637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.798664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.522 qpair failed and we were unable to recover it. 00:22:24.522 [2024-05-15 01:09:36.798856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.799038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.799065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.522 qpair failed and we were unable to recover it. 00:22:24.522 [2024-05-15 01:09:36.799223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.799433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.799458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.522 qpair failed and we were unable to recover it. 00:22:24.522 [2024-05-15 01:09:36.799610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.799766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.799790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.522 qpair failed and we were unable to recover it. 00:22:24.522 [2024-05-15 01:09:36.799949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.800117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.800142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.522 qpair failed and we were unable to recover it. 00:22:24.522 [2024-05-15 01:09:36.800325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.800536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.800561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.522 qpair failed and we were unable to recover it. 
00:22:24.522 [2024-05-15 01:09:36.800764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.800924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.522 [2024-05-15 01:09:36.800957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.522 qpair failed and we were unable to recover it. 00:22:24.522 [2024-05-15 01:09:36.801144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.801302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.801328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.523 qpair failed and we were unable to recover it. 00:22:24.523 [2024-05-15 01:09:36.801510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.801695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.801720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.523 qpair failed and we were unable to recover it. 00:22:24.523 [2024-05-15 01:09:36.801939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.802125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.802150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.523 qpair failed and we were unable to recover it. 00:22:24.523 [2024-05-15 01:09:36.802312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.802474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.802498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.523 qpair failed and we were unable to recover it. 00:22:24.523 [2024-05-15 01:09:36.802694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.802851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.802878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.523 qpair failed and we were unable to recover it. 00:22:24.523 [2024-05-15 01:09:36.803066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.803219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.803244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.523 qpair failed and we were unable to recover it. 
00:22:24.523 [2024-05-15 01:09:36.803427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.803609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.803634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.523 qpair failed and we were unable to recover it. 00:22:24.523 [2024-05-15 01:09:36.803837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.804017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.804043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.523 qpair failed and we were unable to recover it. 00:22:24.523 [2024-05-15 01:09:36.804208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.804368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.804395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.523 qpair failed and we were unable to recover it. 00:22:24.523 [2024-05-15 01:09:36.804583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.804735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.804760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.523 qpair failed and we were unable to recover it. 00:22:24.523 [2024-05-15 01:09:36.804913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.805074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.805099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.523 qpair failed and we were unable to recover it. 00:22:24.523 [2024-05-15 01:09:36.805288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.805439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.805464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.523 qpair failed and we were unable to recover it. 00:22:24.523 [2024-05-15 01:09:36.805641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.805815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.805840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.523 qpair failed and we were unable to recover it. 
00:22:24.523 [2024-05-15 01:09:36.806022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.806181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.806208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.523 qpair failed and we were unable to recover it. 00:22:24.523 [2024-05-15 01:09:36.806376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.806569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.806594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.523 qpair failed and we were unable to recover it. 00:22:24.523 [2024-05-15 01:09:36.806749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.806925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.806957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.523 qpair failed and we were unable to recover it. 00:22:24.523 [2024-05-15 01:09:36.807142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.807337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.807362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.523 qpair failed and we were unable to recover it. 00:22:24.523 [2024-05-15 01:09:36.807537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.807697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.807723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.523 qpair failed and we were unable to recover it. 00:22:24.523 [2024-05-15 01:09:36.807881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.808070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.808096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.523 qpair failed and we were unable to recover it. 00:22:24.523 [2024-05-15 01:09:36.808262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.808443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.808468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.523 qpair failed and we were unable to recover it. 
00:22:24.523 [2024-05-15 01:09:36.808628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.808786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.808811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.523 qpair failed and we were unable to recover it. 00:22:24.523 [2024-05-15 01:09:36.808994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.809156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.523 [2024-05-15 01:09:36.809181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.523 qpair failed and we were unable to recover it. 00:22:24.524 [2024-05-15 01:09:36.809367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.809526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.809551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.524 qpair failed and we were unable to recover it. 00:22:24.524 [2024-05-15 01:09:36.809703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.809857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.809882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.524 qpair failed and we were unable to recover it. 00:22:24.524 [2024-05-15 01:09:36.810057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.810214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.810239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.524 qpair failed and we were unable to recover it. 00:22:24.524 [2024-05-15 01:09:36.810388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.810548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.810575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.524 qpair failed and we were unable to recover it. 00:22:24.524 [2024-05-15 01:09:36.810757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.810908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.810938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.524 qpair failed and we were unable to recover it. 
00:22:24.524 [2024-05-15 01:09:36.811109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.811299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.811324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.524 qpair failed and we were unable to recover it. 00:22:24.524 [2024-05-15 01:09:36.811506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.811670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.811694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.524 qpair failed and we were unable to recover it. 00:22:24.524 [2024-05-15 01:09:36.811884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.812049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.812076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.524 qpair failed and we were unable to recover it. 00:22:24.524 [2024-05-15 01:09:36.812238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.812420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.812445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.524 qpair failed and we were unable to recover it. 00:22:24.524 [2024-05-15 01:09:36.812630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.812785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.812810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.524 qpair failed and we were unable to recover it. 00:22:24.524 [2024-05-15 01:09:36.812979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.813144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.813169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.524 qpair failed and we were unable to recover it. 00:22:24.524 [2024-05-15 01:09:36.813354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.813509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.813536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.524 qpair failed and we were unable to recover it. 
00:22:24.524 [2024-05-15 01:09:36.813733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.813890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.813916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.524 qpair failed and we were unable to recover it. 00:22:24.524 [2024-05-15 01:09:36.814083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.814255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.814281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.524 qpair failed and we were unable to recover it. 00:22:24.524 [2024-05-15 01:09:36.814437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.814597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.814621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.524 qpair failed and we were unable to recover it. 00:22:24.524 [2024-05-15 01:09:36.814806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.814987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.815012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.524 qpair failed and we were unable to recover it. 00:22:24.524 [2024-05-15 01:09:36.815211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.815391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.815416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.524 qpair failed and we were unable to recover it. 00:22:24.524 [2024-05-15 01:09:36.815576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.815786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.815811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.524 qpair failed and we were unable to recover it. 00:22:24.524 [2024-05-15 01:09:36.815975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.816163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.816188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.524 qpair failed and we were unable to recover it. 
00:22:24.524 [2024-05-15 01:09:36.816349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.816533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.524 [2024-05-15 01:09:36.816558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.524 qpair failed and we were unable to recover it. 00:22:24.524 [2024-05-15 01:09:36.816745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.816939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.816964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.525 qpair failed and we were unable to recover it. 00:22:24.525 [2024-05-15 01:09:36.817128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.817311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.817335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.525 qpair failed and we were unable to recover it. 00:22:24.525 [2024-05-15 01:09:36.817515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.817703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.817727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.525 qpair failed and we were unable to recover it. 00:22:24.525 [2024-05-15 01:09:36.817889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.818076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.818101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.525 qpair failed and we were unable to recover it. 00:22:24.525 [2024-05-15 01:09:36.818264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.818422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.818449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.525 qpair failed and we were unable to recover it. 00:22:24.525 [2024-05-15 01:09:36.818633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.818797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.818824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.525 qpair failed and we were unable to recover it. 
00:22:24.525 [2024-05-15 01:09:36.819019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.819203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.819229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.525 qpair failed and we were unable to recover it. 00:22:24.525 [2024-05-15 01:09:36.819440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.819627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.819653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.525 qpair failed and we were unable to recover it. 00:22:24.525 [2024-05-15 01:09:36.819818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.819982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.820007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.525 qpair failed and we were unable to recover it. 00:22:24.525 [2024-05-15 01:09:36.820170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.820349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.820375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.525 qpair failed and we were unable to recover it. 00:22:24.525 [2024-05-15 01:09:36.820556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.820742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.820766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.525 qpair failed and we were unable to recover it. 00:22:24.525 [2024-05-15 01:09:36.820936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.821095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.821122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.525 qpair failed and we were unable to recover it. 00:22:24.525 [2024-05-15 01:09:36.821326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.821491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.821517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.525 qpair failed and we were unable to recover it. 
00:22:24.525 [2024-05-15 01:09:36.821704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.821889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.821914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.525 qpair failed and we were unable to recover it. 00:22:24.525 [2024-05-15 01:09:36.822074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.822260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.822285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.525 qpair failed and we were unable to recover it. 00:22:24.525 [2024-05-15 01:09:36.822435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.822591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.822618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.525 qpair failed and we were unable to recover it. 00:22:24.525 [2024-05-15 01:09:36.822776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.822971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.822996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.525 qpair failed and we were unable to recover it. 00:22:24.525 [2024-05-15 01:09:36.823167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.823330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.823355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.525 qpair failed and we were unable to recover it. 00:22:24.525 [2024-05-15 01:09:36.823547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.823708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.823733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.525 qpair failed and we were unable to recover it. 00:22:24.525 [2024-05-15 01:09:36.823915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.824108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.525 [2024-05-15 01:09:36.824133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.525 qpair failed and we were unable to recover it. 
00:22:24.526 [2024-05-15 01:09:36.824296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.824518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.824542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.526 qpair failed and we were unable to recover it. 00:22:24.526 [2024-05-15 01:09:36.824724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.824887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.824912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.526 qpair failed and we were unable to recover it. 00:22:24.526 [2024-05-15 01:09:36.825104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.825288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.825313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.526 qpair failed and we were unable to recover it. 00:22:24.526 [2024-05-15 01:09:36.825488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.825660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.825685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.526 qpair failed and we were unable to recover it. 00:22:24.526 [2024-05-15 01:09:36.825853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.826064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.826090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.526 qpair failed and we were unable to recover it. 00:22:24.526 [2024-05-15 01:09:36.826248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.826399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.826424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.526 qpair failed and we were unable to recover it. 00:22:24.526 [2024-05-15 01:09:36.826638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.826846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.826870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.526 qpair failed and we were unable to recover it. 
00:22:24.526 [2024-05-15 01:09:36.827031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.827220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.827247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.526 qpair failed and we were unable to recover it. 00:22:24.526 [2024-05-15 01:09:36.827406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.827589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.827614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.526 qpair failed and we were unable to recover it. 00:22:24.526 [2024-05-15 01:09:36.827808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.827993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.828019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.526 qpair failed and we were unable to recover it. 00:22:24.526 [2024-05-15 01:09:36.828188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.828338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.828362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.526 qpair failed and we were unable to recover it. 00:22:24.526 [2024-05-15 01:09:36.828520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.828674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.828699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.526 qpair failed and we were unable to recover it. 00:22:24.526 [2024-05-15 01:09:36.828889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.829059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.829085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.526 qpair failed and we were unable to recover it. 00:22:24.526 [2024-05-15 01:09:36.829273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.829452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.829477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.526 qpair failed and we were unable to recover it. 
00:22:24.526 [2024-05-15 01:09:36.829639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.829834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.829859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.526 qpair failed and we were unable to recover it. 00:22:24.526 [2024-05-15 01:09:36.830027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.830238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.830263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.526 qpair failed and we were unable to recover it. 00:22:24.526 [2024-05-15 01:09:36.830423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.830584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.830609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.526 qpair failed and we were unable to recover it. 00:22:24.526 [2024-05-15 01:09:36.830800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.830997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.831024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.526 qpair failed and we were unable to recover it. 00:22:24.526 [2024-05-15 01:09:36.831194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.831383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.831408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.526 qpair failed and we were unable to recover it. 00:22:24.526 [2024-05-15 01:09:36.831595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.526 [2024-05-15 01:09:36.831750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.831774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.527 qpair failed and we were unable to recover it. 00:22:24.527 [2024-05-15 01:09:36.831945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.832122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.832147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.527 qpair failed and we were unable to recover it. 
00:22:24.527 [2024-05-15 01:09:36.832359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.832543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.832567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.527 qpair failed and we were unable to recover it. 00:22:24.527 [2024-05-15 01:09:36.832724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.832939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.832971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.527 qpair failed and we were unable to recover it. 00:22:24.527 [2024-05-15 01:09:36.833168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.833377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.833402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.527 qpair failed and we were unable to recover it. 00:22:24.527 [2024-05-15 01:09:36.833585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.833793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.833818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.527 qpair failed and we were unable to recover it. 00:22:24.527 [2024-05-15 01:09:36.834035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.834217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.834241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.527 qpair failed and we were unable to recover it. 00:22:24.527 [2024-05-15 01:09:36.834426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.834622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.834647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.527 qpair failed and we were unable to recover it. 00:22:24.527 [2024-05-15 01:09:36.834840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.835023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.835049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.527 qpair failed and we were unable to recover it. 
00:22:24.527 [2024-05-15 01:09:36.835214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.835401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.835426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.527 qpair failed and we were unable to recover it. 00:22:24.527 [2024-05-15 01:09:36.835592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.835746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.835771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.527 qpair failed and we were unable to recover it. 00:22:24.527 [2024-05-15 01:09:36.835934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.836103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.836129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.527 qpair failed and we were unable to recover it. 00:22:24.527 [2024-05-15 01:09:36.836285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.836466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.836491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.527 qpair failed and we were unable to recover it. 00:22:24.527 [2024-05-15 01:09:36.836672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.836852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.836882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.527 qpair failed and we were unable to recover it. 00:22:24.527 [2024-05-15 01:09:36.837049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.837211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.837237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.527 qpair failed and we were unable to recover it. 00:22:24.527 [2024-05-15 01:09:36.837431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.837611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.837636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.527 qpair failed and we were unable to recover it. 
00:22:24.527 [2024-05-15 01:09:36.837819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.837983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.838009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.527 qpair failed and we were unable to recover it. 00:22:24.527 [2024-05-15 01:09:36.838205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.838370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.838395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.527 qpair failed and we were unable to recover it. 00:22:24.527 [2024-05-15 01:09:36.838555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.838712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.838738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.527 qpair failed and we were unable to recover it. 00:22:24.527 [2024-05-15 01:09:36.838923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.839080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.839105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.527 qpair failed and we were unable to recover it. 00:22:24.527 [2024-05-15 01:09:36.839299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.839456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.839481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.527 qpair failed and we were unable to recover it. 00:22:24.527 [2024-05-15 01:09:36.839662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.839821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.839846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.527 qpair failed and we were unable to recover it. 00:22:24.527 [2024-05-15 01:09:36.840014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.840209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.840234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.527 qpair failed and we were unable to recover it. 
00:22:24.527 [2024-05-15 01:09:36.840419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.840581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.840611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.527 qpair failed and we were unable to recover it. 00:22:24.527 [2024-05-15 01:09:36.840794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.840958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.840984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.527 qpair failed and we were unable to recover it. 00:22:24.527 [2024-05-15 01:09:36.841143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.841336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.841361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.527 qpair failed and we were unable to recover it. 00:22:24.527 [2024-05-15 01:09:36.841545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.841704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.527 [2024-05-15 01:09:36.841728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.527 qpair failed and we were unable to recover it. 00:22:24.527 [2024-05-15 01:09:36.841920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.842122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.842149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.528 qpair failed and we were unable to recover it. 00:22:24.528 [2024-05-15 01:09:36.842314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.842471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.842496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.528 qpair failed and we were unable to recover it. 00:22:24.528 [2024-05-15 01:09:36.842661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.842874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.842900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.528 qpair failed and we were unable to recover it. 
00:22:24.528 [2024-05-15 01:09:36.843072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.843254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.843280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.528 qpair failed and we were unable to recover it. 00:22:24.528 [2024-05-15 01:09:36.843465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.843636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.843663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.528 qpair failed and we were unable to recover it. 00:22:24.528 [2024-05-15 01:09:36.843824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.844001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.844028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.528 qpair failed and we were unable to recover it. 00:22:24.528 [2024-05-15 01:09:36.844219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.844385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.844415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.528 qpair failed and we were unable to recover it. 00:22:24.528 [2024-05-15 01:09:36.844569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.844777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.844802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.528 qpair failed and we were unable to recover it. 00:22:24.528 [2024-05-15 01:09:36.845019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.845201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.845226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.528 qpair failed and we were unable to recover it. 00:22:24.528 [2024-05-15 01:09:36.845418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.845597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.845622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.528 qpair failed and we were unable to recover it. 
00:22:24.528 [2024-05-15 01:09:36.845784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.845944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.845970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.528 qpair failed and we were unable to recover it. 00:22:24.528 [2024-05-15 01:09:36.846134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.846293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.846319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.528 qpair failed and we were unable to recover it. 00:22:24.528 [2024-05-15 01:09:36.846508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.846716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.846741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.528 qpair failed and we were unable to recover it. 00:22:24.528 [2024-05-15 01:09:36.846901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.847093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.847118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.528 qpair failed and we were unable to recover it. 00:22:24.528 [2024-05-15 01:09:36.847295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.847450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.847476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.528 qpair failed and we were unable to recover it. 00:22:24.528 [2024-05-15 01:09:36.847674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.847838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.847864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.528 qpair failed and we were unable to recover it. 00:22:24.528 [2024-05-15 01:09:36.848027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.848222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.848247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.528 qpair failed and we were unable to recover it. 
00:22:24.528 [2024-05-15 01:09:36.848410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.848568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.848593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.528 qpair failed and we were unable to recover it. 00:22:24.528 [2024-05-15 01:09:36.848775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.848939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.848964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.528 qpair failed and we were unable to recover it. 00:22:24.528 [2024-05-15 01:09:36.849122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.849305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.849331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.528 qpair failed and we were unable to recover it. 00:22:24.528 [2024-05-15 01:09:36.849513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.849671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.849695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.528 qpair failed and we were unable to recover it. 00:22:24.528 [2024-05-15 01:09:36.849858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.850014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.850040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.528 qpair failed and we were unable to recover it. 00:22:24.528 [2024-05-15 01:09:36.850193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.850405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.850430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.528 qpair failed and we were unable to recover it. 00:22:24.528 [2024-05-15 01:09:36.850598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.850785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.850811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.528 qpair failed and we were unable to recover it. 
00:22:24.528 [2024-05-15 01:09:36.850993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.851150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.851176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.528 qpair failed and we were unable to recover it. 00:22:24.528 [2024-05-15 01:09:36.851365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.851542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.528 [2024-05-15 01:09:36.851567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.528 qpair failed and we were unable to recover it. 00:22:24.528 [2024-05-15 01:09:36.851749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.851936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.851963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.529 qpair failed and we were unable to recover it. 00:22:24.529 [2024-05-15 01:09:36.852155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.852342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.852367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.529 qpair failed and we were unable to recover it. 00:22:24.529 [2024-05-15 01:09:36.852523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.852701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.852726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.529 qpair failed and we were unable to recover it. 00:22:24.529 [2024-05-15 01:09:36.852894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.853091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.853117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.529 qpair failed and we were unable to recover it. 00:22:24.529 [2024-05-15 01:09:36.853275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.853461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.853486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.529 qpair failed and we were unable to recover it. 
00:22:24.529 [2024-05-15 01:09:36.853652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.853815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.853840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.529 qpair failed and we were unable to recover it. 00:22:24.529 [2024-05-15 01:09:36.854007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.854170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.854197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.529 qpair failed and we were unable to recover it. 00:22:24.529 [2024-05-15 01:09:36.854405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.854593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.854618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.529 qpair failed and we were unable to recover it. 00:22:24.529 [2024-05-15 01:09:36.854785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.854946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.854971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.529 qpair failed and we were unable to recover it. 00:22:24.529 [2024-05-15 01:09:36.855160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.855319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.855344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.529 qpair failed and we were unable to recover it. 00:22:24.529 [2024-05-15 01:09:36.855504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.855712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.855736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.529 qpair failed and we were unable to recover it. 00:22:24.529 [2024-05-15 01:09:36.855917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.856116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.856143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.529 qpair failed and we were unable to recover it. 
00:22:24.529 [2024-05-15 01:09:36.856359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.856523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.856548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.529 qpair failed and we were unable to recover it. 00:22:24.529 [2024-05-15 01:09:36.856705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.856895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.856920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.529 qpair failed and we were unable to recover it. 00:22:24.529 [2024-05-15 01:09:36.857089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.857282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.857308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.529 qpair failed and we were unable to recover it. 00:22:24.529 [2024-05-15 01:09:36.857462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.857668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.857692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.529 qpair failed and we were unable to recover it. 00:22:24.529 [2024-05-15 01:09:36.857880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.858043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.858069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.529 qpair failed and we were unable to recover it. 00:22:24.529 [2024-05-15 01:09:36.858229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.858421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.858446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.529 qpair failed and we were unable to recover it. 00:22:24.529 [2024-05-15 01:09:36.858633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.858815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.858840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.529 qpair failed and we were unable to recover it. 
00:22:24.529 [2024-05-15 01:09:36.858998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.859167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.859193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.529 qpair failed and we were unable to recover it. 00:22:24.529 [2024-05-15 01:09:36.859354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.859543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.859568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.529 qpair failed and we were unable to recover it. 00:22:24.529 [2024-05-15 01:09:36.859737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.859889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.859914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.529 qpair failed and we were unable to recover it. 00:22:24.529 [2024-05-15 01:09:36.860119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.860299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.529 [2024-05-15 01:09:36.860325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.530 qpair failed and we were unable to recover it. 00:22:24.530 [2024-05-15 01:09:36.860544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.860723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.860747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.530 qpair failed and we were unable to recover it. 00:22:24.530 [2024-05-15 01:09:36.860959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.861116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.861141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.530 qpair failed and we were unable to recover it. 00:22:24.530 [2024-05-15 01:09:36.861334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.861491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.861516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.530 qpair failed and we were unable to recover it. 
00:22:24.530 [2024-05-15 01:09:36.861704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.861890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.861914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.530 qpair failed and we were unable to recover it. 00:22:24.530 [2024-05-15 01:09:36.862084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.862243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.862268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.530 qpair failed and we were unable to recover it. 00:22:24.530 [2024-05-15 01:09:36.862475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.862634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.862658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.530 qpair failed and we were unable to recover it. 00:22:24.530 [2024-05-15 01:09:36.862837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.863008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.863034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.530 qpair failed and we were unable to recover it. 00:22:24.530 [2024-05-15 01:09:36.863220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.863399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.863425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.530 qpair failed and we were unable to recover it. 00:22:24.530 [2024-05-15 01:09:36.863595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.863758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.863783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.530 qpair failed and we were unable to recover it. 00:22:24.530 [2024-05-15 01:09:36.863952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.864110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.864138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.530 qpair failed and we were unable to recover it. 
00:22:24.530 [2024-05-15 01:09:36.864332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.864515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.864540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.530 qpair failed and we were unable to recover it. 00:22:24.530 [2024-05-15 01:09:36.864699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.864849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.864874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.530 qpair failed and we were unable to recover it. 00:22:24.530 [2024-05-15 01:09:36.865092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.865250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.865275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.530 qpair failed and we were unable to recover it. 00:22:24.530 [2024-05-15 01:09:36.865457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.865642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.865667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.530 qpair failed and we were unable to recover it. 00:22:24.530 [2024-05-15 01:09:36.865848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.866023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.866049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.530 qpair failed and we were unable to recover it. 00:22:24.530 [2024-05-15 01:09:36.866213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.866372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.866397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.530 qpair failed and we were unable to recover it. 00:22:24.530 [2024-05-15 01:09:36.866579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.866787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.866811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.530 qpair failed and we were unable to recover it. 
00:22:24.530 [2024-05-15 01:09:36.866968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.867157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.867182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.530 qpair failed and we were unable to recover it. 00:22:24.530 [2024-05-15 01:09:36.867372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.867534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.530 [2024-05-15 01:09:36.867558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.530 qpair failed and we were unable to recover it. 00:22:24.531 [2024-05-15 01:09:36.867715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.531 [2024-05-15 01:09:36.867898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.531 [2024-05-15 01:09:36.867922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.531 qpair failed and we were unable to recover it. 00:22:24.531 [2024-05-15 01:09:36.868129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.531 [2024-05-15 01:09:36.868311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.531 [2024-05-15 01:09:36.868335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.531 qpair failed and we were unable to recover it. 00:22:24.531 [2024-05-15 01:09:36.868510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.531 [2024-05-15 01:09:36.868672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.531 [2024-05-15 01:09:36.868697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.531 qpair failed and we were unable to recover it. 00:22:24.531 [2024-05-15 01:09:36.868880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.531 [2024-05-15 01:09:36.869095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.531 [2024-05-15 01:09:36.869120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.531 qpair failed and we were unable to recover it. 00:22:24.531 [2024-05-15 01:09:36.869301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.531 [2024-05-15 01:09:36.869491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.531 [2024-05-15 01:09:36.869517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.531 qpair failed and we were unable to recover it. 
00:22:24.531 [2024-05-15 01:09:36.869690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.531 [2024-05-15 01:09:36.869846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.531 [2024-05-15 01:09:36.869873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.531 qpair failed and we were unable to recover it. 00:22:24.531 [2024-05-15 01:09:36.870041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.531 [2024-05-15 01:09:36.870196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.531 [2024-05-15 01:09:36.870221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.531 qpair failed and we were unable to recover it. 00:22:24.531 [2024-05-15 01:09:36.870410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.531 [2024-05-15 01:09:36.870603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.531 [2024-05-15 01:09:36.870628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.531 qpair failed and we were unable to recover it. 00:22:24.531 [2024-05-15 01:09:36.870864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.531 [2024-05-15 01:09:36.871030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.531 [2024-05-15 01:09:36.871056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.531 qpair failed and we were unable to recover it. 00:22:24.531 [2024-05-15 01:09:36.871267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.531 [2024-05-15 01:09:36.871468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.531 [2024-05-15 01:09:36.871496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.531 qpair failed and we were unable to recover it. 00:22:24.531 [2024-05-15 01:09:36.871673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.531 [2024-05-15 01:09:36.871859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.531 [2024-05-15 01:09:36.871883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.531 qpair failed and we were unable to recover it. 00:22:24.531 [2024-05-15 01:09:36.872041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.531 [2024-05-15 01:09:36.872215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.531 [2024-05-15 01:09:36.872240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.531 qpair failed and we were unable to recover it. 
00:22:24.531 [2024-05-15 01:09:36.872423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.531 [2024-05-15 01:09:36.872603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.531 [2024-05-15 01:09:36.872627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.531 qpair failed and we were unable to recover it. 00:22:24.531 [2024-05-15 01:09:36.872794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.531 [2024-05-15 01:09:36.872977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.531 [2024-05-15 01:09:36.873003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.531 qpair failed and we were unable to recover it. 00:22:24.531 [2024-05-15 01:09:36.873168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.813 [2024-05-15 01:09:36.873333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.813 [2024-05-15 01:09:36.873358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.813 qpair failed and we were unable to recover it. 00:22:24.813 [2024-05-15 01:09:36.873550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.813 [2024-05-15 01:09:36.873715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.813 [2024-05-15 01:09:36.873740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.813 qpair failed and we were unable to recover it. 00:22:24.813 [2024-05-15 01:09:36.873923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.813 [2024-05-15 01:09:36.874101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.813 [2024-05-15 01:09:36.874128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.813 qpair failed and we were unable to recover it. 00:22:24.813 [2024-05-15 01:09:36.874297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.813 [2024-05-15 01:09:36.874488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.813 [2024-05-15 01:09:36.874514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.813 qpair failed and we were unable to recover it. 00:22:24.813 [2024-05-15 01:09:36.874669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.813 [2024-05-15 01:09:36.874825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.813 [2024-05-15 01:09:36.874851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.813 qpair failed and we were unable to recover it. 
00:22:24.813 [2024-05-15 01:09:36.875020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.813 [2024-05-15 01:09:36.875208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.813 [2024-05-15 01:09:36.875234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.813 qpair failed and we were unable to recover it. 00:22:24.813 [2024-05-15 01:09:36.875411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.813 [2024-05-15 01:09:36.875579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.813 [2024-05-15 01:09:36.875605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.813 qpair failed and we were unable to recover it. 00:22:24.813 [2024-05-15 01:09:36.875767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.813 [2024-05-15 01:09:36.875958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.813 [2024-05-15 01:09:36.875983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.813 qpair failed and we were unable to recover it. 00:22:24.813 [2024-05-15 01:09:36.876152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.813 [2024-05-15 01:09:36.876311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.813 [2024-05-15 01:09:36.876337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.813 qpair failed and we were unable to recover it. 00:22:24.813 [2024-05-15 01:09:36.876506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.813 [2024-05-15 01:09:36.876664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.813 [2024-05-15 01:09:36.876688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.813 qpair failed and we were unable to recover it. 00:22:24.813 [2024-05-15 01:09:36.876877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.813 [2024-05-15 01:09:36.877056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.813 [2024-05-15 01:09:36.877081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.813 qpair failed and we were unable to recover it. 00:22:24.813 [2024-05-15 01:09:36.877237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.813 [2024-05-15 01:09:36.877397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.813 [2024-05-15 01:09:36.877423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.813 qpair failed and we were unable to recover it. 
00:22:24.813 [2024-05-15 01:09:36.877605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.813 [2024-05-15 01:09:36.877795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.813 [2024-05-15 01:09:36.877819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.813 qpair failed and we were unable to recover it. 00:22:24.813 [2024-05-15 01:09:36.877977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.813 [2024-05-15 01:09:36.878161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.813 [2024-05-15 01:09:36.878185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.813 qpair failed and we were unable to recover it. 00:22:24.813 [2024-05-15 01:09:36.878341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.813 [2024-05-15 01:09:36.878516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.813 [2024-05-15 01:09:36.878541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.813 qpair failed and we were unable to recover it. 00:22:24.813 [2024-05-15 01:09:36.878720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.813 [2024-05-15 01:09:36.878925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.813 [2024-05-15 01:09:36.878961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.813 qpair failed and we were unable to recover it. 00:22:24.813 [2024-05-15 01:09:36.879148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.813 [2024-05-15 01:09:36.879338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.879362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.814 qpair failed and we were unable to recover it. 00:22:24.814 [2024-05-15 01:09:36.879557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.879721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.879747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.814 qpair failed and we were unable to recover it. 00:22:24.814 [2024-05-15 01:09:36.879937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.880113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.880138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.814 qpair failed and we were unable to recover it. 
00:22:24.814 [2024-05-15 01:09:36.880319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.880493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.880517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.814 qpair failed and we were unable to recover it. 00:22:24.814 [2024-05-15 01:09:36.880711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.880872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.880896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.814 qpair failed and we were unable to recover it. 00:22:24.814 [2024-05-15 01:09:36.881097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.881261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.881285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.814 qpair failed and we were unable to recover it. 00:22:24.814 [2024-05-15 01:09:36.881495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.881656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.881682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.814 qpair failed and we were unable to recover it. 00:22:24.814 [2024-05-15 01:09:36.881897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.882057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.882082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.814 qpair failed and we were unable to recover it. 00:22:24.814 [2024-05-15 01:09:36.882276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.882466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.882490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.814 qpair failed and we were unable to recover it. 00:22:24.814 [2024-05-15 01:09:36.882678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.882839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.882868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.814 qpair failed and we were unable to recover it. 
00:22:24.814 [2024-05-15 01:09:36.883026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.883219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.883245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.814 qpair failed and we were unable to recover it. 00:22:24.814 [2024-05-15 01:09:36.883401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.883590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.883615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.814 qpair failed and we were unable to recover it. 00:22:24.814 [2024-05-15 01:09:36.883800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.883987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.884012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.814 qpair failed and we were unable to recover it. 00:22:24.814 [2024-05-15 01:09:36.884181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.884333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.884357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.814 qpair failed and we were unable to recover it. 00:22:24.814 [2024-05-15 01:09:36.884530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.884696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.884720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.814 qpair failed and we were unable to recover it. 00:22:24.814 [2024-05-15 01:09:36.884901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.885069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.885095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.814 qpair failed and we were unable to recover it. 00:22:24.814 [2024-05-15 01:09:36.885253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.885417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.885441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.814 qpair failed and we were unable to recover it. 
00:22:24.814 [2024-05-15 01:09:36.885597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.885761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.885785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.814 qpair failed and we were unable to recover it. 00:22:24.814 [2024-05-15 01:09:36.885958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.886152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.886177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.814 qpair failed and we were unable to recover it. 00:22:24.814 [2024-05-15 01:09:36.886329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.886491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.886516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.814 qpair failed and we were unable to recover it. 00:22:24.814 [2024-05-15 01:09:36.886687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.886874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.886898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.814 qpair failed and we were unable to recover it. 00:22:24.814 [2024-05-15 01:09:36.887114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.887282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.887307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.814 qpair failed and we were unable to recover it. 00:22:24.814 [2024-05-15 01:09:36.887460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.887616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.887641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.814 qpair failed and we were unable to recover it. 00:22:24.814 [2024-05-15 01:09:36.887810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.887989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.888015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.814 qpair failed and we were unable to recover it. 
00:22:24.814 [2024-05-15 01:09:36.888241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.888407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.888431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.814 qpair failed and we were unable to recover it. 00:22:24.814 [2024-05-15 01:09:36.888622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.888808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.888834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.814 qpair failed and we were unable to recover it. 00:22:24.814 [2024-05-15 01:09:36.889023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.889209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.889234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.814 qpair failed and we were unable to recover it. 00:22:24.814 [2024-05-15 01:09:36.889397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.889556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.889581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.814 qpair failed and we were unable to recover it. 00:22:24.814 [2024-05-15 01:09:36.889767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.889952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.889977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.814 qpair failed and we were unable to recover it. 00:22:24.814 [2024-05-15 01:09:36.890136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.890321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.814 [2024-05-15 01:09:36.890346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.814 qpair failed and we were unable to recover it. 00:22:24.814 [2024-05-15 01:09:36.890534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.890740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.890764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.815 qpair failed and we were unable to recover it. 
00:22:24.815 [2024-05-15 01:09:36.890953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.891110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.891134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.815 qpair failed and we were unable to recover it. 00:22:24.815 [2024-05-15 01:09:36.891320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.891470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.891495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.815 qpair failed and we were unable to recover it. 00:22:24.815 [2024-05-15 01:09:36.891689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.891850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.891874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.815 qpair failed and we were unable to recover it. 00:22:24.815 [2024-05-15 01:09:36.892068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.892229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.892253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.815 qpair failed and we were unable to recover it. 00:22:24.815 [2024-05-15 01:09:36.892432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.892644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.892669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.815 qpair failed and we were unable to recover it. 00:22:24.815 [2024-05-15 01:09:36.892848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.893030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.893055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.815 qpair failed and we were unable to recover it. 00:22:24.815 [2024-05-15 01:09:36.893218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.893374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.893399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.815 qpair failed and we were unable to recover it. 
00:22:24.815 [2024-05-15 01:09:36.893579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.893741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.893765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.815 qpair failed and we were unable to recover it. 00:22:24.815 [2024-05-15 01:09:36.893948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.894106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.894132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.815 qpair failed and we were unable to recover it. 00:22:24.815 [2024-05-15 01:09:36.894284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.894470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.894494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.815 qpair failed and we were unable to recover it. 00:22:24.815 [2024-05-15 01:09:36.894664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.894848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.894872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.815 qpair failed and we were unable to recover it. 00:22:24.815 [2024-05-15 01:09:36.895054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.895233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.895257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.815 qpair failed and we were unable to recover it. 00:22:24.815 [2024-05-15 01:09:36.895478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.895641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.895665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.815 qpair failed and we were unable to recover it. 00:22:24.815 [2024-05-15 01:09:36.895830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.895994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.896019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.815 qpair failed and we were unable to recover it. 
00:22:24.815 [2024-05-15 01:09:36.896209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.896364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.896388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.815 qpair failed and we were unable to recover it. 00:22:24.815 [2024-05-15 01:09:36.896543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.896722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.896746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.815 qpair failed and we were unable to recover it. 00:22:24.815 [2024-05-15 01:09:36.896935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.897140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.897164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.815 qpair failed and we were unable to recover it. 00:22:24.815 [2024-05-15 01:09:36.897343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.897561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.897585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.815 qpair failed and we were unable to recover it. 00:22:24.815 [2024-05-15 01:09:36.897749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.897934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.897959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.815 qpair failed and we were unable to recover it. 00:22:24.815 [2024-05-15 01:09:36.898110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.898271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.898298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.815 qpair failed and we were unable to recover it. 00:22:24.815 [2024-05-15 01:09:36.898488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.898643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.898668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.815 qpair failed and we were unable to recover it. 
00:22:24.815 [2024-05-15 01:09:36.898833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.899003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.899029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.815 qpair failed and we were unable to recover it. 00:22:24.815 [2024-05-15 01:09:36.899192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.899360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.899387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.815 qpair failed and we were unable to recover it. 00:22:24.815 [2024-05-15 01:09:36.899576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.899731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.899755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.815 qpair failed and we were unable to recover it. 00:22:24.815 [2024-05-15 01:09:36.899909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.900074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.900101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.815 qpair failed and we were unable to recover it. 00:22:24.815 [2024-05-15 01:09:36.900272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.900437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.900462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.815 qpair failed and we were unable to recover it. 00:22:24.815 [2024-05-15 01:09:36.900673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.900882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.900907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.815 qpair failed and we were unable to recover it. 00:22:24.815 [2024-05-15 01:09:36.901106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.901261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.901286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.815 qpair failed and we were unable to recover it. 
00:22:24.815 [2024-05-15 01:09:36.901447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.815 [2024-05-15 01:09:36.901631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.901655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.816 qpair failed and we were unable to recover it. 00:22:24.816 [2024-05-15 01:09:36.901831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.902017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.902047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.816 qpair failed and we were unable to recover it. 00:22:24.816 [2024-05-15 01:09:36.902244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.902394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.902419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.816 qpair failed and we were unable to recover it. 00:22:24.816 [2024-05-15 01:09:36.902583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.902741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.902766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.816 qpair failed and we were unable to recover it. 00:22:24.816 [2024-05-15 01:09:36.902938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.903101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.903129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.816 qpair failed and we were unable to recover it. 00:22:24.816 [2024-05-15 01:09:36.903289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.903478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.903504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.816 qpair failed and we were unable to recover it. 00:22:24.816 [2024-05-15 01:09:36.903693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.903879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.903903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.816 qpair failed and we were unable to recover it. 
00:22:24.816 [2024-05-15 01:09:36.904079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.904267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.904292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.816 qpair failed and we were unable to recover it. 00:22:24.816 [2024-05-15 01:09:36.904451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.904634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.904659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.816 qpair failed and we were unable to recover it. 00:22:24.816 [2024-05-15 01:09:36.904816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.905006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.905032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.816 qpair failed and we were unable to recover it. 00:22:24.816 [2024-05-15 01:09:36.905214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.905370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.905395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.816 qpair failed and we were unable to recover it. 00:22:24.816 [2024-05-15 01:09:36.905550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.905715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.905740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.816 qpair failed and we were unable to recover it. 00:22:24.816 [2024-05-15 01:09:36.905941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.906106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.906132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.816 qpair failed and we were unable to recover it. 00:22:24.816 [2024-05-15 01:09:36.906310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.906489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.906514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.816 qpair failed and we were unable to recover it. 
00:22:24.816 [2024-05-15 01:09:36.906673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.906849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.906873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.816 qpair failed and we were unable to recover it. 00:22:24.816 [2024-05-15 01:09:36.907033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.907220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.907245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.816 qpair failed and we were unable to recover it. 00:22:24.816 [2024-05-15 01:09:36.907439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.907615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.907640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.816 qpair failed and we were unable to recover it. 00:22:24.816 [2024-05-15 01:09:36.907799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.907954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.907979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.816 qpair failed and we were unable to recover it. 00:22:24.816 [2024-05-15 01:09:36.908196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.908363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.908388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.816 qpair failed and we were unable to recover it. 00:22:24.816 [2024-05-15 01:09:36.908574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.908735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.908761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.816 qpair failed and we were unable to recover it. 00:22:24.816 [2024-05-15 01:09:36.908937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.909100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.909124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.816 qpair failed and we were unable to recover it. 
00:22:24.816 [2024-05-15 01:09:36.909310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.909491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.909516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.816 qpair failed and we were unable to recover it. 00:22:24.816 [2024-05-15 01:09:36.909696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.909883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.909909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.816 qpair failed and we were unable to recover it. 00:22:24.816 [2024-05-15 01:09:36.910105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.910297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.910327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.816 qpair failed and we were unable to recover it. 00:22:24.816 [2024-05-15 01:09:36.910548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.910708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.910733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.816 qpair failed and we were unable to recover it. 00:22:24.816 [2024-05-15 01:09:36.910896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.911101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.911127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.816 qpair failed and we were unable to recover it. 00:22:24.816 [2024-05-15 01:09:36.911315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.911515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.911539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.816 qpair failed and we were unable to recover it. 00:22:24.816 [2024-05-15 01:09:36.911724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.911889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.911914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.816 qpair failed and we were unable to recover it. 
00:22:24.816 [2024-05-15 01:09:36.912102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.912310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.912335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.816 qpair failed and we were unable to recover it. 00:22:24.816 [2024-05-15 01:09:36.912501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.912694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.816 [2024-05-15 01:09:36.912721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.816 qpair failed and we were unable to recover it. 00:22:24.816 [2024-05-15 01:09:36.912893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.913081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.913108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.817 qpair failed and we were unable to recover it. 00:22:24.817 [2024-05-15 01:09:36.913293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.913483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.913508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.817 qpair failed and we were unable to recover it. 00:22:24.817 [2024-05-15 01:09:36.913702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.913864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.913890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.817 qpair failed and we were unable to recover it. 00:22:24.817 [2024-05-15 01:09:36.914064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.914228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.914253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.817 qpair failed and we were unable to recover it. 00:22:24.817 [2024-05-15 01:09:36.914416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.914601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.914626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.817 qpair failed and we were unable to recover it. 
00:22:24.817 [2024-05-15 01:09:36.914784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.914946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.914972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.817 qpair failed and we were unable to recover it. 00:22:24.817 [2024-05-15 01:09:36.915145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.915301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.915328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.817 qpair failed and we were unable to recover it. 00:22:24.817 [2024-05-15 01:09:36.915524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.915718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.915744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.817 qpair failed and we were unable to recover it. 00:22:24.817 [2024-05-15 01:09:36.915901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.916064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.916089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.817 qpair failed and we were unable to recover it. 00:22:24.817 [2024-05-15 01:09:36.916248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.916418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.916443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.817 qpair failed and we were unable to recover it. 00:22:24.817 [2024-05-15 01:09:36.916603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.916779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.916804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.817 qpair failed and we were unable to recover it. 00:22:24.817 [2024-05-15 01:09:36.917039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.917207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.917232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.817 qpair failed and we were unable to recover it. 
00:22:24.817 [2024-05-15 01:09:36.917430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.917608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.917633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.817 qpair failed and we were unable to recover it. 00:22:24.817 [2024-05-15 01:09:36.917815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.917974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.918001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.817 qpair failed and we were unable to recover it. 00:22:24.817 [2024-05-15 01:09:36.918163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.918320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.918345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.817 qpair failed and we were unable to recover it. 00:22:24.817 [2024-05-15 01:09:36.918537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.918722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.918747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.817 qpair failed and we were unable to recover it. 00:22:24.817 [2024-05-15 01:09:36.918927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.919104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.919130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.817 qpair failed and we were unable to recover it. 00:22:24.817 [2024-05-15 01:09:36.919322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.919482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.919509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.817 qpair failed and we were unable to recover it. 00:22:24.817 [2024-05-15 01:09:36.919676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.919832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.919856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.817 qpair failed and we were unable to recover it. 
00:22:24.817 [2024-05-15 01:09:36.920047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.920205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.920230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.817 qpair failed and we were unable to recover it. 00:22:24.817 [2024-05-15 01:09:36.920405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.920598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.920622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.817 qpair failed and we were unable to recover it. 00:22:24.817 [2024-05-15 01:09:36.920810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.920974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.921001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.817 qpair failed and we were unable to recover it. 00:22:24.817 [2024-05-15 01:09:36.921191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.921355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.921380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.817 qpair failed and we were unable to recover it. 00:22:24.817 [2024-05-15 01:09:36.921591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.817 [2024-05-15 01:09:36.921770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.921795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.818 qpair failed and we were unable to recover it. 00:22:24.818 [2024-05-15 01:09:36.921978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.922179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.922203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.818 qpair failed and we were unable to recover it. 00:22:24.818 [2024-05-15 01:09:36.922366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.922532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.922559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.818 qpair failed and we were unable to recover it. 
00:22:24.818 [2024-05-15 01:09:36.922737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.922900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.922926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.818 qpair failed and we were unable to recover it. 00:22:24.818 [2024-05-15 01:09:36.923088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.923247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.923272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.818 qpair failed and we were unable to recover it. 00:22:24.818 [2024-05-15 01:09:36.923450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.923615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.923640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.818 qpair failed and we were unable to recover it. 00:22:24.818 [2024-05-15 01:09:36.923794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.923960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.923986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.818 qpair failed and we were unable to recover it. 00:22:24.818 [2024-05-15 01:09:36.924159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.924320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.924345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.818 qpair failed and we were unable to recover it. 00:22:24.818 [2024-05-15 01:09:36.924504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.924665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.924693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.818 qpair failed and we were unable to recover it. 00:22:24.818 [2024-05-15 01:09:36.924861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.925016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.925041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.818 qpair failed and we were unable to recover it. 
00:22:24.818 [2024-05-15 01:09:36.925218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.925413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.925440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.818 qpair failed and we were unable to recover it. 00:22:24.818 [2024-05-15 01:09:36.925632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.925812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.925837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.818 qpair failed and we were unable to recover it. 00:22:24.818 [2024-05-15 01:09:36.926001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.926194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.926220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.818 qpair failed and we were unable to recover it. 00:22:24.818 [2024-05-15 01:09:36.926383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.926542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.926567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.818 qpair failed and we were unable to recover it. 00:22:24.818 [2024-05-15 01:09:36.926739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.926925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.926956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.818 qpair failed and we were unable to recover it. 00:22:24.818 [2024-05-15 01:09:36.927125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.927300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.927325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.818 qpair failed and we were unable to recover it. 00:22:24.818 [2024-05-15 01:09:36.927481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.927673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.927698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.818 qpair failed and we were unable to recover it. 
00:22:24.818 [2024-05-15 01:09:36.927859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.928026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.928052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.818 qpair failed and we were unable to recover it. 00:22:24.818 [2024-05-15 01:09:36.928232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.928415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.928442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.818 qpair failed and we were unable to recover it. 00:22:24.818 [2024-05-15 01:09:36.928619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.928812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.928837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.818 qpair failed and we were unable to recover it. 00:22:24.818 [2024-05-15 01:09:36.929031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.929218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.929243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.818 qpair failed and we were unable to recover it. 00:22:24.818 [2024-05-15 01:09:36.929403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.929572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.929597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.818 qpair failed and we were unable to recover it. 00:22:24.818 [2024-05-15 01:09:36.929763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.929926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.929957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.818 qpair failed and we were unable to recover it. 00:22:24.818 [2024-05-15 01:09:36.930112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.930283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.930307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.818 qpair failed and we were unable to recover it. 
00:22:24.818 [2024-05-15 01:09:36.930476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.818 [2024-05-15 01:09:36.930661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.930685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.819 qpair failed and we were unable to recover it. 00:22:24.819 [2024-05-15 01:09:36.930867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.931049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.931075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.819 qpair failed and we were unable to recover it. 00:22:24.819 [2024-05-15 01:09:36.931242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.931400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.931425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.819 qpair failed and we were unable to recover it. 00:22:24.819 [2024-05-15 01:09:36.931620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.931778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.931803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.819 qpair failed and we were unable to recover it. 00:22:24.819 [2024-05-15 01:09:36.931991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.932154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.932178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.819 qpair failed and we were unable to recover it. 00:22:24.819 [2024-05-15 01:09:36.932345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.932514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.932542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.819 qpair failed and we were unable to recover it. 00:22:24.819 [2024-05-15 01:09:36.932710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.932898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.932923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.819 qpair failed and we were unable to recover it. 
00:22:24.819 [2024-05-15 01:09:36.933085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.933247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.933272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.819 qpair failed and we were unable to recover it. 00:22:24.819 [2024-05-15 01:09:36.933452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.933615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.933640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.819 qpair failed and we were unable to recover it. 00:22:24.819 [2024-05-15 01:09:36.933828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.933989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.934014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.819 qpair failed and we were unable to recover it. 00:22:24.819 [2024-05-15 01:09:36.934182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.934349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.934375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.819 qpair failed and we were unable to recover it. 00:22:24.819 [2024-05-15 01:09:36.934560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.934728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.934752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.819 qpair failed and we were unable to recover it. 00:22:24.819 [2024-05-15 01:09:36.934910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.935081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.935109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.819 qpair failed and we were unable to recover it. 00:22:24.819 [2024-05-15 01:09:36.935303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.935483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.935509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.819 qpair failed and we were unable to recover it. 
00:22:24.819 [2024-05-15 01:09:36.935676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.935859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.935884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.819 qpair failed and we were unable to recover it. 00:22:24.819 [2024-05-15 01:09:36.936083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.936270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.936300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.819 qpair failed and we were unable to recover it. 00:22:24.819 [2024-05-15 01:09:36.936462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.936618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.936643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.819 qpair failed and we were unable to recover it. 00:22:24.819 [2024-05-15 01:09:36.936808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.936976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.937002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.819 qpair failed and we were unable to recover it. 00:22:24.819 [2024-05-15 01:09:36.937187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.937347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.937372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.819 qpair failed and we were unable to recover it. 00:22:24.819 [2024-05-15 01:09:36.937533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.937696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.937723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.819 qpair failed and we were unable to recover it. 00:22:24.819 [2024-05-15 01:09:36.937912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.938067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.938092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.819 qpair failed and we were unable to recover it. 
00:22:24.819 [2024-05-15 01:09:36.938251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.938433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.938457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.819 qpair failed and we were unable to recover it. 00:22:24.819 [2024-05-15 01:09:36.938619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.938801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.938825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.819 qpair failed and we were unable to recover it. 00:22:24.819 [2024-05-15 01:09:36.939017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.939176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.939202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.819 qpair failed and we were unable to recover it. 00:22:24.819 [2024-05-15 01:09:36.939368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.939553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.939578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.819 qpair failed and we were unable to recover it. 00:22:24.819 [2024-05-15 01:09:36.939759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.939955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.939985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.819 qpair failed and we were unable to recover it. 00:22:24.819 [2024-05-15 01:09:36.940145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.940331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.940355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.819 qpair failed and we were unable to recover it. 00:22:24.819 [2024-05-15 01:09:36.940531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.940742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.940767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.819 qpair failed and we were unable to recover it. 
00:22:24.819 [2024-05-15 01:09:36.940942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.941132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.941159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.819 qpair failed and we were unable to recover it. 00:22:24.819 [2024-05-15 01:09:36.941321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.819 [2024-05-15 01:09:36.941495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.941520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.820 qpair failed and we were unable to recover it. 00:22:24.820 [2024-05-15 01:09:36.941681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.941842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.941867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.820 qpair failed and we were unable to recover it. 00:22:24.820 [2024-05-15 01:09:36.942047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.942258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.942283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.820 qpair failed and we were unable to recover it. 00:22:24.820 [2024-05-15 01:09:36.942510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.942697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.942722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.820 qpair failed and we were unable to recover it. 00:22:24.820 [2024-05-15 01:09:36.942913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.943104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.943131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.820 qpair failed and we were unable to recover it. 00:22:24.820 [2024-05-15 01:09:36.943288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.943475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.943500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.820 qpair failed and we were unable to recover it. 
00:22:24.820 [2024-05-15 01:09:36.943651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.943820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.943851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.820 qpair failed and we were unable to recover it. 00:22:24.820 [2024-05-15 01:09:36.944016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.944175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.944201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.820 qpair failed and we were unable to recover it. 00:22:24.820 [2024-05-15 01:09:36.944388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.944569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.944594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.820 qpair failed and we were unable to recover it. 00:22:24.820 [2024-05-15 01:09:36.944783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.944963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.944989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.820 qpair failed and we were unable to recover it. 00:22:24.820 [2024-05-15 01:09:36.945175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.945344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.945369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.820 qpair failed and we were unable to recover it. 00:22:24.820 [2024-05-15 01:09:36.945540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.945705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.945732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.820 qpair failed and we were unable to recover it. 00:22:24.820 [2024-05-15 01:09:36.945894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.946090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.946116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.820 qpair failed and we were unable to recover it. 
00:22:24.820 [2024-05-15 01:09:36.946305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.946459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.946484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.820 qpair failed and we were unable to recover it. 00:22:24.820 [2024-05-15 01:09:36.946636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.946830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.946856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.820 qpair failed and we were unable to recover it. 00:22:24.820 [2024-05-15 01:09:36.947041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.947211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.947236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.820 qpair failed and we were unable to recover it. 00:22:24.820 [2024-05-15 01:09:36.947402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.947598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.947628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.820 qpair failed and we were unable to recover it. 00:22:24.820 [2024-05-15 01:09:36.947790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.947967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.947993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.820 qpair failed and we were unable to recover it. 00:22:24.820 [2024-05-15 01:09:36.948165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.948315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.948340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.820 qpair failed and we were unable to recover it. 00:22:24.820 [2024-05-15 01:09:36.948539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.948711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.948737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.820 qpair failed and we were unable to recover it. 
00:22:24.820 [2024-05-15 01:09:36.948910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.949079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.949105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.820 qpair failed and we were unable to recover it. 00:22:24.820 [2024-05-15 01:09:36.949264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.949445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.949469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.820 qpair failed and we were unable to recover it. 00:22:24.820 [2024-05-15 01:09:36.949633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.949783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.949808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.820 qpair failed and we were unable to recover it. 00:22:24.820 [2024-05-15 01:09:36.949991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.950171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.950196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.820 qpair failed and we were unable to recover it. 00:22:24.820 [2024-05-15 01:09:36.950378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.950550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.950577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.820 qpair failed and we were unable to recover it. 00:22:24.820 [2024-05-15 01:09:36.950765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.950918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.950949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.820 qpair failed and we were unable to recover it. 00:22:24.820 [2024-05-15 01:09:36.951114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.951275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.820 [2024-05-15 01:09:36.951300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.821 qpair failed and we were unable to recover it. 
00:22:24.821 [2024-05-15 01:09:36.951468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.951629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.951654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.821 qpair failed and we were unable to recover it. 00:22:24.821 [2024-05-15 01:09:36.951848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.952005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.952030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.821 qpair failed and we were unable to recover it. 00:22:24.821 [2024-05-15 01:09:36.952206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.952368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.952393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.821 qpair failed and we were unable to recover it. 00:22:24.821 [2024-05-15 01:09:36.952584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.952765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.952789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.821 qpair failed and we were unable to recover it. 00:22:24.821 [2024-05-15 01:09:36.952981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.953142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.953166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.821 qpair failed and we were unable to recover it. 00:22:24.821 [2024-05-15 01:09:36.953325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.953492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.953517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.821 qpair failed and we were unable to recover it. 00:22:24.821 [2024-05-15 01:09:36.953674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.953854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.953878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.821 qpair failed and we were unable to recover it. 
00:22:24.821 [2024-05-15 01:09:36.954039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.954214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.954238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.821 qpair failed and we were unable to recover it. 00:22:24.821 [2024-05-15 01:09:36.954420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.954604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.954628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.821 qpair failed and we were unable to recover it. 00:22:24.821 [2024-05-15 01:09:36.954801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.954988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.955013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.821 qpair failed and we were unable to recover it. 00:22:24.821 [2024-05-15 01:09:36.955210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.955362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.955387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.821 qpair failed and we were unable to recover it. 00:22:24.821 [2024-05-15 01:09:36.955554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.955709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.955733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.821 qpair failed and we were unable to recover it. 00:22:24.821 [2024-05-15 01:09:36.955926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.956111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.956135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.821 qpair failed and we were unable to recover it. 00:22:24.821 [2024-05-15 01:09:36.956298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.956459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.956484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.821 qpair failed and we were unable to recover it. 
00:22:24.821 [2024-05-15 01:09:36.956678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.956861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.956886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.821 qpair failed and we were unable to recover it. 00:22:24.821 [2024-05-15 01:09:36.957079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.957262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.957285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.821 qpair failed and we were unable to recover it. 00:22:24.821 [2024-05-15 01:09:36.957473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.957662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.957686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.821 qpair failed and we were unable to recover it. 00:22:24.821 [2024-05-15 01:09:36.957850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.958009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.958036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.821 qpair failed and we were unable to recover it. 00:22:24.821 [2024-05-15 01:09:36.958199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.958389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.958414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.821 qpair failed and we were unable to recover it. 00:22:24.821 [2024-05-15 01:09:36.958582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.958761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.958786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.821 qpair failed and we were unable to recover it. 00:22:24.821 [2024-05-15 01:09:36.958978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.959145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.959170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.821 qpair failed and we were unable to recover it. 
00:22:24.821 [2024-05-15 01:09:36.959354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.959526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.959551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.821 qpair failed and we were unable to recover it. 00:22:24.821 [2024-05-15 01:09:36.959710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.959872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.959897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.821 qpair failed and we were unable to recover it. 00:22:24.821 [2024-05-15 01:09:36.960086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.960270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.960295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.821 qpair failed and we were unable to recover it. 00:22:24.821 [2024-05-15 01:09:36.960458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.960619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.960644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.821 qpair failed and we were unable to recover it. 00:22:24.821 [2024-05-15 01:09:36.960803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.960965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.960991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.821 qpair failed and we were unable to recover it. 00:22:24.821 [2024-05-15 01:09:36.961157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.961311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.961337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.821 qpair failed and we were unable to recover it. 00:22:24.821 [2024-05-15 01:09:36.961490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.961661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.961686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.821 qpair failed and we were unable to recover it. 
00:22:24.821 [2024-05-15 01:09:36.961869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.962031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.821 [2024-05-15 01:09:36.962058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.821 qpair failed and we were unable to recover it. 00:22:24.822 [2024-05-15 01:09:36.962283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.962459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.962484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.822 qpair failed and we were unable to recover it. 00:22:24.822 [2024-05-15 01:09:36.962659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.962817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.962842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.822 qpair failed and we were unable to recover it. 00:22:24.822 [2024-05-15 01:09:36.963010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.963212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.963238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.822 qpair failed and we were unable to recover it. 00:22:24.822 [2024-05-15 01:09:36.963399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.963553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.963579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.822 qpair failed and we were unable to recover it. 00:22:24.822 [2024-05-15 01:09:36.963747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.963905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.963936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.822 qpair failed and we were unable to recover it. 00:22:24.822 [2024-05-15 01:09:36.964124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.964337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.964362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.822 qpair failed and we were unable to recover it. 
00:22:24.822 [2024-05-15 01:09:36.964526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.964700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.964724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.822 qpair failed and we were unable to recover it. 00:22:24.822 [2024-05-15 01:09:36.964907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.965078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.965103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.822 qpair failed and we were unable to recover it. 00:22:24.822 [2024-05-15 01:09:36.965288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.965462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.965487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.822 qpair failed and we were unable to recover it. 00:22:24.822 [2024-05-15 01:09:36.965644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.965803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.965828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.822 qpair failed and we were unable to recover it. 00:22:24.822 [2024-05-15 01:09:36.966027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.966208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.966234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.822 qpair failed and we were unable to recover it. 00:22:24.822 [2024-05-15 01:09:36.966435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.966613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.966638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.822 qpair failed and we were unable to recover it. 00:22:24.822 [2024-05-15 01:09:36.966796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.966965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.966993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.822 qpair failed and we were unable to recover it. 
00:22:24.822 [2024-05-15 01:09:36.967156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.967318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.967343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.822 qpair failed and we were unable to recover it. 00:22:24.822 [2024-05-15 01:09:36.967505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.967664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.967688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.822 qpair failed and we were unable to recover it. 00:22:24.822 [2024-05-15 01:09:36.967839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.968007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.968033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.822 qpair failed and we were unable to recover it. 00:22:24.822 [2024-05-15 01:09:36.968214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.968375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.968401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.822 qpair failed and we were unable to recover it. 00:22:24.822 [2024-05-15 01:09:36.968578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.968764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.968788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.822 qpair failed and we were unable to recover it. 00:22:24.822 [2024-05-15 01:09:36.968968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.969157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.969182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.822 qpair failed and we were unable to recover it. 00:22:24.822 [2024-05-15 01:09:36.969343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.969512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.822 [2024-05-15 01:09:36.969537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.822 qpair failed and we were unable to recover it. 
00:22:24.823 [2024-05-15 01:09:36.969707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.969879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.969904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.823 qpair failed and we were unable to recover it. 00:22:24.823 [2024-05-15 01:09:36.970082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.970250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.970277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.823 qpair failed and we were unable to recover it. 00:22:24.823 [2024-05-15 01:09:36.970443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.970630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.970656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.823 qpair failed and we were unable to recover it. 00:22:24.823 [2024-05-15 01:09:36.970829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.970993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.971019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.823 qpair failed and we were unable to recover it. 00:22:24.823 [2024-05-15 01:09:36.971190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.971353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.971378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.823 qpair failed and we were unable to recover it. 00:22:24.823 [2024-05-15 01:09:36.971539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.971730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.971755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.823 qpair failed and we were unable to recover it. 00:22:24.823 [2024-05-15 01:09:36.971911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.972082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.972108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.823 qpair failed and we were unable to recover it. 
00:22:24.823 [2024-05-15 01:09:36.972302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.972492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.972517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.823 qpair failed and we were unable to recover it. 00:22:24.823 [2024-05-15 01:09:36.972678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.972840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.972865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.823 qpair failed and we were unable to recover it. 00:22:24.823 [2024-05-15 01:09:36.973025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.973208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.973233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.823 qpair failed and we were unable to recover it. 00:22:24.823 [2024-05-15 01:09:36.973410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.973600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.973625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63e8000b90 with addr=10.0.0.2, port=4420 00:22:24.823 qpair failed and we were unable to recover it. 00:22:24.823 [2024-05-15 01:09:36.973821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.973998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.974028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.823 qpair failed and we were unable to recover it. 00:22:24.823 [2024-05-15 01:09:36.974223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.974385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.974412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.823 qpair failed and we were unable to recover it. 00:22:24.823 [2024-05-15 01:09:36.974612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.974805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.974829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.823 qpair failed and we were unable to recover it. 
00:22:24.823 [2024-05-15 01:09:36.974999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.975177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.975202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.823 qpair failed and we were unable to recover it. 00:22:24.823 [2024-05-15 01:09:36.975391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.975575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.975600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.823 qpair failed and we were unable to recover it. 00:22:24.823 [2024-05-15 01:09:36.975762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.975946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.975972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.823 qpair failed and we were unable to recover it. 00:22:24.823 [2024-05-15 01:09:36.976245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.976510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.976535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.823 qpair failed and we were unable to recover it. 00:22:24.823 [2024-05-15 01:09:36.976692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.976854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.976879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.823 qpair failed and we were unable to recover it. 00:22:24.823 [2024-05-15 01:09:36.977059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.977223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.977248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.823 qpair failed and we were unable to recover it. 00:22:24.823 [2024-05-15 01:09:36.977413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.977575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.977601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.823 qpair failed and we were unable to recover it. 
00:22:24.823 [2024-05-15 01:09:36.977782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.977975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.978001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.823 qpair failed and we were unable to recover it. 00:22:24.823 [2024-05-15 01:09:36.978164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.978332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.978357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.823 qpair failed and we were unable to recover it. 00:22:24.823 [2024-05-15 01:09:36.978536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.978690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.978715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.823 qpair failed and we were unable to recover it. 00:22:24.823 [2024-05-15 01:09:36.978896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.979062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.979088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.823 qpair failed and we were unable to recover it. 00:22:24.823 [2024-05-15 01:09:36.979357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.979520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.979545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.823 qpair failed and we were unable to recover it. 00:22:24.823 [2024-05-15 01:09:36.979704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.979865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.979890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.823 qpair failed and we were unable to recover it. 00:22:24.823 [2024-05-15 01:09:36.980082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.980243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.823 [2024-05-15 01:09:36.980269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.823 qpair failed and we were unable to recover it. 
00:22:24.823 [2024-05-15 01:09:36.980454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.980612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.980638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.824 qpair failed and we were unable to recover it. 00:22:24.824 [2024-05-15 01:09:36.980813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.980971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.980997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.824 qpair failed and we were unable to recover it. 00:22:24.824 [2024-05-15 01:09:36.981177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.981333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.981358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.824 qpair failed and we were unable to recover it. 00:22:24.824 [2024-05-15 01:09:36.981625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.981782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.981808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.824 qpair failed and we were unable to recover it. 00:22:24.824 [2024-05-15 01:09:36.981989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.982145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.982170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.824 qpair failed and we were unable to recover it. 00:22:24.824 [2024-05-15 01:09:36.982359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.982542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.982567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.824 qpair failed and we were unable to recover it. 00:22:24.824 [2024-05-15 01:09:36.982727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.982938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.982964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.824 qpair failed and we were unable to recover it. 
00:22:24.824 [2024-05-15 01:09:36.983129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.983288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.983314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.824 qpair failed and we were unable to recover it. 00:22:24.824 [2024-05-15 01:09:36.983506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.983664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.983690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.824 qpair failed and we were unable to recover it. 00:22:24.824 [2024-05-15 01:09:36.983843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.984023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.984049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.824 qpair failed and we were unable to recover it. 00:22:24.824 [2024-05-15 01:09:36.984316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.984509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.984535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.824 qpair failed and we were unable to recover it. 00:22:24.824 [2024-05-15 01:09:36.984723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.984899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.984924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.824 qpair failed and we were unable to recover it. 00:22:24.824 [2024-05-15 01:09:36.985148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.985326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.985351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.824 qpair failed and we were unable to recover it. 00:22:24.824 [2024-05-15 01:09:36.985513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.985674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.985699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.824 qpair failed and we were unable to recover it. 
00:22:24.824 [2024-05-15 01:09:36.985856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.986020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.986047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.824 qpair failed and we were unable to recover it. 00:22:24.824 [2024-05-15 01:09:36.986208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.986368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.986394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.824 qpair failed and we were unable to recover it. 00:22:24.824 [2024-05-15 01:09:36.986570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.986753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.986776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.824 qpair failed and we were unable to recover it. 00:22:24.824 [2024-05-15 01:09:36.986973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.987247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.987273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.824 qpair failed and we were unable to recover it. 00:22:24.824 [2024-05-15 01:09:36.987432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.987622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.987646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.824 qpair failed and we were unable to recover it. 00:22:24.824 [2024-05-15 01:09:36.987830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.987995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.988022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.824 qpair failed and we were unable to recover it. 00:22:24.824 [2024-05-15 01:09:36.988186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.988349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.988375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.824 qpair failed and we were unable to recover it. 
00:22:24.824 [2024-05-15 01:09:36.988727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.988893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.988918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.824 qpair failed and we were unable to recover it. 00:22:24.824 [2024-05-15 01:09:36.989087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.989254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.989279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.824 qpair failed and we were unable to recover it. 00:22:24.824 [2024-05-15 01:09:36.989470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.989640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.989665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.824 qpair failed and we were unable to recover it. 00:22:24.824 [2024-05-15 01:09:36.989851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.990031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.990058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.824 qpair failed and we were unable to recover it. 00:22:24.824 [2024-05-15 01:09:36.990228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.990385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.990409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.824 qpair failed and we were unable to recover it. 00:22:24.824 [2024-05-15 01:09:36.990575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.990767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.824 [2024-05-15 01:09:36.990793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.824 qpair failed and we were unable to recover it. 00:22:24.825 [2024-05-15 01:09:36.990953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.991108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.991133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.825 qpair failed and we were unable to recover it. 
00:22:24.825 [2024-05-15 01:09:36.991318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.991480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.991506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.825 qpair failed and we were unable to recover it. 00:22:24.825 [2024-05-15 01:09:36.991684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.991874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.991898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.825 qpair failed and we were unable to recover it. 00:22:24.825 [2024-05-15 01:09:36.992072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.992234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.992258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.825 qpair failed and we were unable to recover it. 00:22:24.825 [2024-05-15 01:09:36.992442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.992601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.992626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.825 qpair failed and we were unable to recover it. 00:22:24.825 [2024-05-15 01:09:36.992810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.992976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.993002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.825 qpair failed and we were unable to recover it. 00:22:24.825 [2024-05-15 01:09:36.993165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.993334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.993359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.825 qpair failed and we were unable to recover it. 00:22:24.825 [2024-05-15 01:09:36.993524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.993684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.993709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.825 qpair failed and we were unable to recover it. 
00:22:24.825 [2024-05-15 01:09:36.993897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.994071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.994098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.825 qpair failed and we were unable to recover it. 00:22:24.825 [2024-05-15 01:09:36.994290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.994467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.994492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.825 qpair failed and we were unable to recover it. 00:22:24.825 [2024-05-15 01:09:36.994647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.994801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.994826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.825 qpair failed and we were unable to recover it. 00:22:24.825 [2024-05-15 01:09:36.995012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.995171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.995196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.825 qpair failed and we were unable to recover it. 00:22:24.825 [2024-05-15 01:09:36.995383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.995567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.995592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.825 qpair failed and we were unable to recover it. 00:22:24.825 [2024-05-15 01:09:36.995746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.995946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.995972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.825 qpair failed and we were unable to recover it. 00:22:24.825 [2024-05-15 01:09:36.996137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.996295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.996320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.825 qpair failed and we were unable to recover it. 
00:22:24.825 [2024-05-15 01:09:36.996496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.996685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.996710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.825 qpair failed and we were unable to recover it. 00:22:24.825 [2024-05-15 01:09:36.996895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.997056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.997086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.825 qpair failed and we were unable to recover it. 00:22:24.825 [2024-05-15 01:09:36.997256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.997457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.997482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.825 qpair failed and we were unable to recover it. 00:22:24.825 [2024-05-15 01:09:36.997685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.997875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.997899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.825 qpair failed and we were unable to recover it. 00:22:24.825 [2024-05-15 01:09:36.998099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.998267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.998291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.825 qpair failed and we were unable to recover it. 00:22:24.825 [2024-05-15 01:09:36.998450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.998637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.998662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.825 qpair failed and we were unable to recover it. 00:22:24.825 [2024-05-15 01:09:36.998852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.999051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.999077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.825 qpair failed and we were unable to recover it. 
00:22:24.825 [2024-05-15 01:09:36.999242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.999407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.999431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.825 qpair failed and we were unable to recover it. 00:22:24.825 [2024-05-15 01:09:36.999619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.999789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:36.999815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.825 qpair failed and we were unable to recover it. 00:22:24.825 [2024-05-15 01:09:36.999982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:37.000172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:37.000197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.825 qpair failed and we were unable to recover it. 00:22:24.825 [2024-05-15 01:09:37.000353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:37.000616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:37.000641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.825 qpair failed and we were unable to recover it. 00:22:24.825 [2024-05-15 01:09:37.000798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:37.000965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:37.000995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.825 qpair failed and we were unable to recover it. 00:22:24.825 [2024-05-15 01:09:37.001163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:37.001325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.825 [2024-05-15 01:09:37.001352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.826 qpair failed and we were unable to recover it. 00:22:24.826 [2024-05-15 01:09:37.001521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.001693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.001718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.826 qpair failed and we were unable to recover it. 
00:22:24.826 [2024-05-15 01:09:37.001876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.002042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.002068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.826 qpair failed and we were unable to recover it. 00:22:24.826 [2024-05-15 01:09:37.002248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.002436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.002461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.826 qpair failed and we were unable to recover it. 00:22:24.826 [2024-05-15 01:09:37.002651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.002812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.002837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.826 qpair failed and we were unable to recover it. 00:22:24.826 [2024-05-15 01:09:37.003003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.003175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.003200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.826 qpair failed and we were unable to recover it. 00:22:24.826 [2024-05-15 01:09:37.003388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.003551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.003577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.826 qpair failed and we were unable to recover it. 00:22:24.826 [2024-05-15 01:09:37.003739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.003897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.003923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.826 qpair failed and we were unable to recover it. 00:22:24.826 [2024-05-15 01:09:37.004087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.004245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.004271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.826 qpair failed and we were unable to recover it. 
00:22:24.826 [2024-05-15 01:09:37.004459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.004622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.004651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.826 qpair failed and we were unable to recover it. 00:22:24.826 [2024-05-15 01:09:37.004818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.005008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.005034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.826 qpair failed and we were unable to recover it. 00:22:24.826 [2024-05-15 01:09:37.005303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.005484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.005509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.826 qpair failed and we were unable to recover it. 00:22:24.826 [2024-05-15 01:09:37.005669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.005852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.005877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.826 qpair failed and we were unable to recover it. 00:22:24.826 [2024-05-15 01:09:37.006071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.006230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.006254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.826 qpair failed and we were unable to recover it. 00:22:24.826 [2024-05-15 01:09:37.006436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.006615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.006640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.826 qpair failed and we were unable to recover it. 00:22:24.826 [2024-05-15 01:09:37.006800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.006961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.006987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.826 qpair failed and we were unable to recover it. 
00:22:24.826 [2024-05-15 01:09:37.007149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.007324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.007349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.826 qpair failed and we were unable to recover it. 00:22:24.826 [2024-05-15 01:09:37.007564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.007763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.007789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.826 qpair failed and we were unable to recover it. 00:22:24.826 [2024-05-15 01:09:37.007952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.008107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.008132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.826 qpair failed and we were unable to recover it. 00:22:24.826 [2024-05-15 01:09:37.008296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.008483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.008513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.826 qpair failed and we were unable to recover it. 00:22:24.826 [2024-05-15 01:09:37.008696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.008882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.008907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.826 qpair failed and we were unable to recover it. 00:22:24.826 [2024-05-15 01:09:37.009073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.009251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.009277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.826 qpair failed and we were unable to recover it. 00:22:24.826 [2024-05-15 01:09:37.009430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.009585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.009610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.826 qpair failed and we were unable to recover it. 
00:22:24.826 [2024-05-15 01:09:37.009878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.010035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.010061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.826 qpair failed and we were unable to recover it. 00:22:24.826 [2024-05-15 01:09:37.010239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.010393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.010418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.826 qpair failed and we were unable to recover it. 00:22:24.826 [2024-05-15 01:09:37.010610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.010770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.010794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.826 qpair failed and we were unable to recover it. 00:22:24.826 [2024-05-15 01:09:37.010956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.011109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.011134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.826 qpair failed and we were unable to recover it. 00:22:24.826 [2024-05-15 01:09:37.011297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.011460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.011485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.826 qpair failed and we were unable to recover it. 00:22:24.826 [2024-05-15 01:09:37.011680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.011876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.011903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.826 qpair failed and we were unable to recover it. 00:22:24.826 [2024-05-15 01:09:37.012094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.012250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.826 [2024-05-15 01:09:37.012275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.827 qpair failed and we were unable to recover it. 
00:22:24.827 [2024-05-15 01:09:37.012491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.012694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.012719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.827 qpair failed and we were unable to recover it. 00:22:24.827 [2024-05-15 01:09:37.012880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.013095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.013121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.827 qpair failed and we were unable to recover it. 00:22:24.827 [2024-05-15 01:09:37.013286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.013476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.013501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.827 qpair failed and we were unable to recover it. 00:22:24.827 [2024-05-15 01:09:37.013673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.013830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.013854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.827 qpair failed and we were unable to recover it. 00:22:24.827 [2024-05-15 01:09:37.014036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.014249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.014274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.827 qpair failed and we were unable to recover it. 00:22:24.827 [2024-05-15 01:09:37.014460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.014619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.014643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.827 qpair failed and we were unable to recover it. 00:22:24.827 [2024-05-15 01:09:37.014832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.014993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.015019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.827 qpair failed and we were unable to recover it. 
00:22:24.827 [2024-05-15 01:09:37.015231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.015388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.015413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.827 qpair failed and we were unable to recover it. 00:22:24.827 [2024-05-15 01:09:37.015591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.015773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.015798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.827 qpair failed and we were unable to recover it. 00:22:24.827 [2024-05-15 01:09:37.015987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.016190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.016215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.827 qpair failed and we were unable to recover it. 00:22:24.827 [2024-05-15 01:09:37.016405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.016561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.016587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.827 qpair failed and we were unable to recover it. 00:22:24.827 [2024-05-15 01:09:37.016744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.016912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.017046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.827 qpair failed and we were unable to recover it. 00:22:24.827 [2024-05-15 01:09:37.017227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.017383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.017408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.827 qpair failed and we were unable to recover it. 00:22:24.827 [2024-05-15 01:09:37.017564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.017749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.017774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.827 qpair failed and we were unable to recover it. 
00:22:24.827 [2024-05-15 01:09:37.017928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.018098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.018123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.827 qpair failed and we were unable to recover it. 00:22:24.827 [2024-05-15 01:09:37.018279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.018459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.018485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.827 qpair failed and we were unable to recover it. 00:22:24.827 [2024-05-15 01:09:37.018637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.018819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.018844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.827 qpair failed and we were unable to recover it. 00:22:24.827 [2024-05-15 01:09:37.019031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.019190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.019215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.827 qpair failed and we were unable to recover it. 00:22:24.827 [2024-05-15 01:09:37.019399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.019555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.019580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.827 qpair failed and we were unable to recover it. 00:22:24.827 [2024-05-15 01:09:37.019756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.019940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.019965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.827 qpair failed and we were unable to recover it. 00:22:24.827 [2024-05-15 01:09:37.020130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.020311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.020336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.827 qpair failed and we were unable to recover it. 
00:22:24.827 [2024-05-15 01:09:37.020496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.020660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.020688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.827 qpair failed and we were unable to recover it. 00:22:24.827 [2024-05-15 01:09:37.020843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.021042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.021069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.827 qpair failed and we were unable to recover it. 00:22:24.827 [2024-05-15 01:09:37.021234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.021414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.827 [2024-05-15 01:09:37.021440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.828 qpair failed and we were unable to recover it. 00:22:24.828 [2024-05-15 01:09:37.021619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.021839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.021865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.828 qpair failed and we were unable to recover it. 00:22:24.828 [2024-05-15 01:09:37.022024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.022183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.022209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.828 qpair failed and we were unable to recover it. 00:22:24.828 [2024-05-15 01:09:37.022364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.022540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.022565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.828 qpair failed and we were unable to recover it. 00:22:24.828 [2024-05-15 01:09:37.022752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.022914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.022948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.828 qpair failed and we were unable to recover it. 
00:22:24.828 [2024-05-15 01:09:37.023133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.023299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.023326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.828 qpair failed and we were unable to recover it. 00:22:24.828 [2024-05-15 01:09:37.023519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.023687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.023712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.828 qpair failed and we were unable to recover it. 00:22:24.828 [2024-05-15 01:09:37.023900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.024094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.024121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.828 qpair failed and we were unable to recover it. 00:22:24.828 [2024-05-15 01:09:37.024284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.024464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.024489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.828 qpair failed and we were unable to recover it. 00:22:24.828 [2024-05-15 01:09:37.024676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.024866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.024891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.828 qpair failed and we were unable to recover it. 00:22:24.828 [2024-05-15 01:09:37.025067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.025221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.025247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.828 qpair failed and we were unable to recover it. 00:22:24.828 [2024-05-15 01:09:37.025413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.025589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.025614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.828 qpair failed and we were unable to recover it. 
00:22:24.828 [2024-05-15 01:09:37.025807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.025975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.026001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.828 qpair failed and we were unable to recover it. 00:22:24.828 [2024-05-15 01:09:37.026209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.026405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.026429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.828 qpair failed and we were unable to recover it. 00:22:24.828 [2024-05-15 01:09:37.026588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.026745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.026770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.828 qpair failed and we were unable to recover it. 00:22:24.828 [2024-05-15 01:09:37.026939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.027101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.027126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.828 qpair failed and we were unable to recover it. 00:22:24.828 [2024-05-15 01:09:37.027286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.027454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.027480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.828 qpair failed and we were unable to recover it. 00:22:24.828 [2024-05-15 01:09:37.027755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.027911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.027942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.828 qpair failed and we were unable to recover it. 00:22:24.828 [2024-05-15 01:09:37.028104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.028261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.028286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.828 qpair failed and we were unable to recover it. 
00:22:24.828 [2024-05-15 01:09:37.028501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.028714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.028739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.828 qpair failed and we were unable to recover it. 00:22:24.828 [2024-05-15 01:09:37.028922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.029098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.029123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.828 qpair failed and we were unable to recover it. 00:22:24.828 [2024-05-15 01:09:37.029279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.029467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.029492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.828 qpair failed and we were unable to recover it. 00:22:24.828 [2024-05-15 01:09:37.029677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.029838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.029863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.828 qpair failed and we were unable to recover it. 00:22:24.828 [2024-05-15 01:09:37.030030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.030247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.030273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.828 qpair failed and we were unable to recover it. 00:22:24.828 [2024-05-15 01:09:37.030438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.030619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.030645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.828 qpair failed and we were unable to recover it. 00:22:24.828 [2024-05-15 01:09:37.030827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.030989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.031015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.828 qpair failed and we were unable to recover it. 
00:22:24.828 [2024-05-15 01:09:37.031202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.031361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.031386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.828 qpair failed and we were unable to recover it. 00:22:24.828 [2024-05-15 01:09:37.031659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.031875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.031899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.828 qpair failed and we were unable to recover it. 00:22:24.828 [2024-05-15 01:09:37.032071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.032229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.032253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.828 qpair failed and we were unable to recover it. 00:22:24.828 [2024-05-15 01:09:37.032414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.032573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.828 [2024-05-15 01:09:37.032601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.828 qpair failed and we were unable to recover it. 00:22:24.828 [2024-05-15 01:09:37.032792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.032969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.032995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.829 qpair failed and we were unable to recover it. 00:22:24.829 [2024-05-15 01:09:37.033201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.033358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.033383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.829 qpair failed and we were unable to recover it. 00:22:24.829 [2024-05-15 01:09:37.033547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.033708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.033735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.829 qpair failed and we were unable to recover it. 
00:22:24.829 [2024-05-15 01:09:37.033902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.034066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.034091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.829 qpair failed and we were unable to recover it. 00:22:24.829 [2024-05-15 01:09:37.034275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.034457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.034482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.829 qpair failed and we were unable to recover it. 00:22:24.829 [2024-05-15 01:09:37.034751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.034911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.034942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.829 qpair failed and we were unable to recover it. 00:22:24.829 [2024-05-15 01:09:37.035123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.035295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.035320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.829 qpair failed and we were unable to recover it. 00:22:24.829 [2024-05-15 01:09:37.035493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.035681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.035706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.829 qpair failed and we were unable to recover it. 00:22:24.829 [2024-05-15 01:09:37.035879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.036031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.036057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.829 qpair failed and we were unable to recover it. 00:22:24.829 [2024-05-15 01:09:37.036243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.036451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.036476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.829 qpair failed and we were unable to recover it. 
00:22:24.829 [2024-05-15 01:09:37.036663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.036847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.036872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.829 qpair failed and we were unable to recover it. 00:22:24.829 [2024-05-15 01:09:37.037027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.037195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.037221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.829 qpair failed and we were unable to recover it. 00:22:24.829 [2024-05-15 01:09:37.037428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.037593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.037619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.829 qpair failed and we were unable to recover it. 00:22:24.829 [2024-05-15 01:09:37.037780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.037974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.038001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.829 qpair failed and we were unable to recover it. 00:22:24.829 [2024-05-15 01:09:37.038152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.038315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.038340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.829 qpair failed and we were unable to recover it. 00:22:24.829 [2024-05-15 01:09:37.038497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.038676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.038701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.829 qpair failed and we were unable to recover it. 00:22:24.829 [2024-05-15 01:09:37.038889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.039046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.039072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f0000b90 with addr=10.0.0.2, port=4420 00:22:24.829 qpair failed and we were unable to recover it. 
00:22:24.829 [2024-05-15 01:09:37.039272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.039481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.039509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.829 qpair failed and we were unable to recover it. 00:22:24.829 [2024-05-15 01:09:37.039672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.039847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.039872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.829 qpair failed and we were unable to recover it. 00:22:24.829 [2024-05-15 01:09:37.040036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.040207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.040235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.829 qpair failed and we were unable to recover it. 00:22:24.829 [2024-05-15 01:09:37.040397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.040557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.040582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.829 qpair failed and we were unable to recover it. 00:22:24.829 [2024-05-15 01:09:37.040743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.040926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.040958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.829 qpair failed and we were unable to recover it. 00:22:24.829 [2024-05-15 01:09:37.041154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.041346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.041371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.829 qpair failed and we were unable to recover it. 00:22:24.829 [2024-05-15 01:09:37.041524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.041694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.041721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.829 qpair failed and we were unable to recover it. 
00:22:24.829 [2024-05-15 01:09:37.041916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.042117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.829 [2024-05-15 01:09:37.042143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.829 qpair failed and we were unable to recover it. 00:22:24.829 [2024-05-15 01:09:37.042307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.042455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.042480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.830 qpair failed and we were unable to recover it. 00:22:24.830 [2024-05-15 01:09:37.042649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.042811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.042838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.830 qpair failed and we were unable to recover it. 00:22:24.830 [2024-05-15 01:09:37.043031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.043219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.043244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.830 qpair failed and we were unable to recover it. 00:22:24.830 [2024-05-15 01:09:37.043401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.043559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.043584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.830 qpair failed and we were unable to recover it. 00:22:24.830 [2024-05-15 01:09:37.043742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.043896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.043923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.830 qpair failed and we were unable to recover it. 00:22:24.830 [2024-05-15 01:09:37.044115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.044273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.044298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.830 qpair failed and we were unable to recover it. 
00:22:24.830 [2024-05-15 01:09:37.044483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.044693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.044718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.830 qpair failed and we were unable to recover it. 00:22:24.830 [2024-05-15 01:09:37.044877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.045039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.045065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.830 qpair failed and we were unable to recover it. 00:22:24.830 [2024-05-15 01:09:37.045230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.045382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.045407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.830 qpair failed and we were unable to recover it. 00:22:24.830 [2024-05-15 01:09:37.045621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.045777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.045801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.830 qpair failed and we were unable to recover it. 00:22:24.830 [2024-05-15 01:09:37.045970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.046148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.046173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.830 qpair failed and we were unable to recover it. 00:22:24.830 [2024-05-15 01:09:37.046330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.046518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.046542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.830 qpair failed and we were unable to recover it. 00:22:24.830 [2024-05-15 01:09:37.046721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.046876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.046901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.830 qpair failed and we were unable to recover it. 
00:22:24.830 [2024-05-15 01:09:37.047066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.047222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.047246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.830 qpair failed and we were unable to recover it. 00:22:24.830 [2024-05-15 01:09:37.047432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.047583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.047608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.830 qpair failed and we were unable to recover it. 00:22:24.830 [2024-05-15 01:09:37.047799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.047987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.048012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.830 qpair failed and we were unable to recover it. 00:22:24.830 [2024-05-15 01:09:37.048188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.048402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.048427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.830 qpair failed and we were unable to recover it. 00:22:24.830 [2024-05-15 01:09:37.048588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.048751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.048778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.830 qpair failed and we were unable to recover it. 00:22:24.830 [2024-05-15 01:09:37.048969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.049134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.049159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.830 qpair failed and we were unable to recover it. 00:22:24.830 [2024-05-15 01:09:37.049353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.049570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.049595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.830 qpair failed and we were unable to recover it. 
00:22:24.830 [2024-05-15 01:09:37.049750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.049912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.049944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.830 qpair failed and we were unable to recover it. 00:22:24.830 [2024-05-15 01:09:37.050114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.050267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.050292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.830 qpair failed and we were unable to recover it. 00:22:24.830 [2024-05-15 01:09:37.050450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.050671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.050696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.830 qpair failed and we were unable to recover it. 00:22:24.830 [2024-05-15 01:09:37.050875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.051043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.051070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.830 qpair failed and we were unable to recover it. 00:22:24.830 [2024-05-15 01:09:37.051236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.051396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.051422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.830 qpair failed and we were unable to recover it. 00:22:24.830 [2024-05-15 01:09:37.051650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.051812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.051838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.830 qpair failed and we were unable to recover it. 00:22:24.830 [2024-05-15 01:09:37.052001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.830 [2024-05-15 01:09:37.052162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.052186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.831 qpair failed and we were unable to recover it. 
00:22:24.831 [2024-05-15 01:09:37.052371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.052552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.052577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.831 qpair failed and we were unable to recover it. 00:22:24.831 [2024-05-15 01:09:37.052737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.052919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.052951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.831 qpair failed and we were unable to recover it. 00:22:24.831 [2024-05-15 01:09:37.053118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.053284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.053309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.831 qpair failed and we were unable to recover it. 00:22:24.831 [2024-05-15 01:09:37.053493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.053644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.053668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.831 qpair failed and we were unable to recover it. 00:22:24.831 [2024-05-15 01:09:37.053836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.054040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.054066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.831 qpair failed and we were unable to recover it. 00:22:24.831 [2024-05-15 01:09:37.054252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.054411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.054440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.831 qpair failed and we were unable to recover it. 00:22:24.831 [2024-05-15 01:09:37.054654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.054812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.054837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.831 qpair failed and we were unable to recover it. 
00:22:24.831 [2024-05-15 01:09:37.055003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.055169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.055196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.831 qpair failed and we were unable to recover it. 00:22:24.831 [2024-05-15 01:09:37.055380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.055553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.055578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.831 qpair failed and we were unable to recover it. 00:22:24.831 [2024-05-15 01:09:37.055772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.055939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.055964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.831 qpair failed and we were unable to recover it. 00:22:24.831 [2024-05-15 01:09:37.056134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.056297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.056322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.831 qpair failed and we were unable to recover it. 00:22:24.831 [2024-05-15 01:09:37.056475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.056627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.056652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.831 qpair failed and we were unable to recover it. 00:22:24.831 [2024-05-15 01:09:37.056843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.057027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.057053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.831 qpair failed and we were unable to recover it. 00:22:24.831 [2024-05-15 01:09:37.057240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.057393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.057418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.831 qpair failed and we were unable to recover it. 
00:22:24.831 [2024-05-15 01:09:37.057575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.057726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.057751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.831 qpair failed and we were unable to recover it. 00:22:24.831 [2024-05-15 01:09:37.057926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.058090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.058122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.831 qpair failed and we were unable to recover it. 00:22:24.831 [2024-05-15 01:09:37.058307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.058460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.058485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.831 qpair failed and we were unable to recover it. 00:22:24.831 [2024-05-15 01:09:37.058667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.058850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.058875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.831 qpair failed and we were unable to recover it. 00:22:24.831 [2024-05-15 01:09:37.059068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.059253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.059278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.831 qpair failed and we were unable to recover it. 00:22:24.831 [2024-05-15 01:09:37.059435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.059656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.059681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.831 qpair failed and we were unable to recover it. 00:22:24.831 [2024-05-15 01:09:37.059836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.059990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.060016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.831 qpair failed and we were unable to recover it. 
00:22:24.831 [2024-05-15 01:09:37.060233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.060391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.060416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.831 qpair failed and we were unable to recover it. 00:22:24.831 [2024-05-15 01:09:37.060585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.060735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.060760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.831 qpair failed and we were unable to recover it. 00:22:24.831 [2024-05-15 01:09:37.060947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.061143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.061170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.831 qpair failed and we were unable to recover it. 00:22:24.831 [2024-05-15 01:09:37.061354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.061537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.061562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.831 qpair failed and we were unable to recover it. 00:22:24.831 [2024-05-15 01:09:37.061772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.061964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.061994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.831 qpair failed and we were unable to recover it. 00:22:24.831 [2024-05-15 01:09:37.062158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.062314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.062339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.831 qpair failed and we were unable to recover it. 00:22:24.831 [2024-05-15 01:09:37.062498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.062680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.062704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.831 qpair failed and we were unable to recover it. 
00:22:24.831 [2024-05-15 01:09:37.062890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.831 [2024-05-15 01:09:37.063108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.832 [2024-05-15 01:09:37.063133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.832 qpair failed and we were unable to recover it. 00:22:24.832 [2024-05-15 01:09:37.063317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.832 [2024-05-15 01:09:37.063504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.832 [2024-05-15 01:09:37.063529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.832 qpair failed and we were unable to recover it. 00:22:24.832 [2024-05-15 01:09:37.063708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.832 [2024-05-15 01:09:37.063937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.832 [2024-05-15 01:09:37.063962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.832 qpair failed and we were unable to recover it. 00:22:24.832 [2024-05-15 01:09:37.064135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.832 [2024-05-15 01:09:37.064293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.832 [2024-05-15 01:09:37.064317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.832 qpair failed and we were unable to recover it. 00:22:24.832 [2024-05-15 01:09:37.064512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.832 [2024-05-15 01:09:37.064702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.832 [2024-05-15 01:09:37.064726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.832 qpair failed and we were unable to recover it. 00:22:24.832 [2024-05-15 01:09:37.064956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.832 [2024-05-15 01:09:37.065117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.832 [2024-05-15 01:09:37.065143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.832 qpair failed and we were unable to recover it. 00:22:24.832 [2024-05-15 01:09:37.065328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.832 [2024-05-15 01:09:37.065483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.832 [2024-05-15 01:09:37.065510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.832 qpair failed and we were unable to recover it. 
00:22:24.832 [2024-05-15 01:09:37.065674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.832 [2024-05-15 01:09:37.065825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.832 [2024-05-15 01:09:37.065854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.832 qpair failed and we were unable to recover it. 00:22:24.832 [2024-05-15 01:09:37.066024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.832 [2024-05-15 01:09:37.066214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.832 [2024-05-15 01:09:37.066240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.832 qpair failed and we were unable to recover it. 00:22:24.832 [2024-05-15 01:09:37.066425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.832 [2024-05-15 01:09:37.066585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.832 [2024-05-15 01:09:37.066610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.832 qpair failed and we were unable to recover it. 00:22:24.832 [2024-05-15 01:09:37.066774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.832 [2024-05-15 01:09:37.066986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.832 [2024-05-15 01:09:37.067012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.832 qpair failed and we were unable to recover it. 00:22:24.832 [2024-05-15 01:09:37.067178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.832 [2024-05-15 01:09:37.067334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.832 [2024-05-15 01:09:37.067359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.832 qpair failed and we were unable to recover it. 00:22:24.832 [2024-05-15 01:09:37.067549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.832 [2024-05-15 01:09:37.067733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.832 [2024-05-15 01:09:37.067758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.832 qpair failed and we were unable to recover it. 00:22:24.832 [2024-05-15 01:09:37.067915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.832 [2024-05-15 01:09:37.068102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.832 [2024-05-15 01:09:37.068127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.832 qpair failed and we were unable to recover it. 
00:22:24.832 [2024-05-15 01:09:37.068318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.832 [2024-05-15 01:09:37.068473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.832 [2024-05-15 01:09:37.068498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.832 qpair failed and we were unable to recover it. 00:22:24.832 [2024-05-15 01:09:37.068664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.832 [2024-05-15 01:09:37.068844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.832 [2024-05-15 01:09:37.068869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.832 qpair failed and we were unable to recover it. 00:22:24.832 [2024-05-15 01:09:37.069060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.832 [2024-05-15 01:09:37.069255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.069280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.833 qpair failed and we were unable to recover it. 00:22:24.833 [2024-05-15 01:09:37.069435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.069622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.069646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.833 qpair failed and we were unable to recover it. 00:22:24.833 [2024-05-15 01:09:37.069831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.069989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.070015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.833 qpair failed and we were unable to recover it. 00:22:24.833 [2024-05-15 01:09:37.070169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.070359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.070383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.833 qpair failed and we were unable to recover it. 00:22:24.833 [2024-05-15 01:09:37.070564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.070719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.070743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.833 qpair failed and we were unable to recover it. 
00:22:24.833 [2024-05-15 01:09:37.070938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.071088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.071113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.833 qpair failed and we were unable to recover it. 00:22:24.833 [2024-05-15 01:09:37.071267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.071432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.071457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.833 qpair failed and we were unable to recover it. 00:22:24.833 [2024-05-15 01:09:37.071662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.071871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.071896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.833 qpair failed and we were unable to recover it. 00:22:24.833 [2024-05-15 01:09:37.072096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.072280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.072304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.833 qpair failed and we were unable to recover it. 00:22:24.833 [2024-05-15 01:09:37.072484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.072668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.072693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.833 qpair failed and we were unable to recover it. 00:22:24.833 [2024-05-15 01:09:37.072885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.073044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.073070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.833 qpair failed and we were unable to recover it. 00:22:24.833 [2024-05-15 01:09:37.073231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.073416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.073440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.833 qpair failed and we were unable to recover it. 
00:22:24.833 [2024-05-15 01:09:37.073602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.073794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.073820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.833 qpair failed and we were unable to recover it. 00:22:24.833 [2024-05-15 01:09:37.073977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.074147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.074172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.833 qpair failed and we were unable to recover it. 00:22:24.833 [2024-05-15 01:09:37.074365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.074521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.074546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.833 qpair failed and we were unable to recover it. 00:22:24.833 [2024-05-15 01:09:37.074702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.074856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.074882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.833 qpair failed and we were unable to recover it. 00:22:24.833 [2024-05-15 01:09:37.075068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.075257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.075282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.833 qpair failed and we were unable to recover it. 00:22:24.833 [2024-05-15 01:09:37.075435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.075647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.075672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.833 qpair failed and we were unable to recover it. 00:22:24.833 [2024-05-15 01:09:37.075829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.075991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.076016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.833 qpair failed and we were unable to recover it. 
00:22:24.833 [2024-05-15 01:09:37.076192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.076346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.076372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.833 qpair failed and we were unable to recover it. 00:22:24.833 [2024-05-15 01:09:37.076555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.076712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.076737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.833 qpair failed and we were unable to recover it. 00:22:24.833 [2024-05-15 01:09:37.076914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.077098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.077123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.833 qpair failed and we were unable to recover it. 00:22:24.833 [2024-05-15 01:09:37.077316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.077495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.077520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.833 qpair failed and we were unable to recover it. 00:22:24.833 [2024-05-15 01:09:37.077706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.077866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.077893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.833 qpair failed and we were unable to recover it. 00:22:24.833 [2024-05-15 01:09:37.078094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.078311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.078336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.833 qpair failed and we were unable to recover it. 00:22:24.833 [2024-05-15 01:09:37.078522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.078699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.833 [2024-05-15 01:09:37.078724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.833 qpair failed and we were unable to recover it. 
00:22:24.835 [2024-05-15 01:09:37.094851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.835 [2024-05-15 01:09:37.095036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.835 [2024-05-15 01:09:37.095062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420
00:22:24.835 qpair failed and we were unable to recover it.
00:22:24.835 [2024-05-15 01:09:37.095219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.835 [2024-05-15 01:09:37.095429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.835 [2024-05-15 01:09:37.095454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420
00:22:24.835 qpair failed and we were unable to recover it.
00:22:24.835 [2024-05-15 01:09:37.095618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.835 [2024-05-15 01:09:37.095798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.835 [2024-05-15 01:09:37.095822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420
00:22:24.835 qpair failed and we were unable to recover it.
00:22:24.835 [2024-05-15 01:09:37.095982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.835 [2024-05-15 01:09:37.096135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.835 [2024-05-15 01:09:37.096160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420
00:22:24.835 qpair failed and we were unable to recover it.
00:22:24.835 [2024-05-15 01:09:37.096339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.835 [2024-05-15 01:09:37.096514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.835 [2024-05-15 01:09:37.096542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420
00:22:24.835 qpair failed and we were unable to recover it.
00:22:24.835 [2024-05-15 01:09:37.096726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.835 [2024-05-15 01:09:37.096905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.835 [2024-05-15 01:09:37.096938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420
00:22:24.835 qpair failed and we were unable to recover it.
00:22:24.835 [2024-05-15 01:09:37.097113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.835 [2024-05-15 01:09:37.097281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.835 [2024-05-15 01:09:37.097307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420
00:22:24.835 qpair failed and we were unable to recover it.
00:22:24.839 [2024-05-15 01:09:37.131264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.131439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.131462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.839 qpair failed and we were unable to recover it. 00:22:24.839 [2024-05-15 01:09:37.131655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.131871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.131896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.839 qpair failed and we were unable to recover it. 00:22:24.839 [2024-05-15 01:09:37.132090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.132279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.132303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.839 qpair failed and we were unable to recover it. 00:22:24.839 [2024-05-15 01:09:37.132461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.132624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.132651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.839 qpair failed and we were unable to recover it. 00:22:24.839 [2024-05-15 01:09:37.132834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.133020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.133045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.839 qpair failed and we were unable to recover it. 00:22:24.839 [2024-05-15 01:09:37.133203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.133393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.133418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.839 qpair failed and we were unable to recover it. 00:22:24.839 [2024-05-15 01:09:37.133578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.133786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.133810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.839 qpair failed and we were unable to recover it. 
00:22:24.839 [2024-05-15 01:09:37.133970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.134127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.134151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.839 qpair failed and we were unable to recover it. 00:22:24.839 [2024-05-15 01:09:37.134314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.134521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.134546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.839 qpair failed and we were unable to recover it. 00:22:24.839 [2024-05-15 01:09:37.134734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.134890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.134914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.839 qpair failed and we were unable to recover it. 00:22:24.839 [2024-05-15 01:09:37.135140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.135322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.135345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.839 qpair failed and we were unable to recover it. 00:22:24.839 [2024-05-15 01:09:37.135530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.135681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.135705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.839 qpair failed and we were unable to recover it. 00:22:24.839 [2024-05-15 01:09:37.135868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.136026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.136052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.839 qpair failed and we were unable to recover it. 00:22:24.839 [2024-05-15 01:09:37.136215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.136367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.136391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.839 qpair failed and we were unable to recover it. 
00:22:24.839 [2024-05-15 01:09:37.136573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.136754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.136778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.839 qpair failed and we were unable to recover it. 00:22:24.839 [2024-05-15 01:09:37.136955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.137141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.137165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.839 qpair failed and we were unable to recover it. 00:22:24.839 [2024-05-15 01:09:37.137320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.137480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.137504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.839 qpair failed and we were unable to recover it. 00:22:24.839 [2024-05-15 01:09:37.137682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.137864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.137888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.839 qpair failed and we were unable to recover it. 00:22:24.839 [2024-05-15 01:09:37.138060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.138223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.138249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.839 qpair failed and we were unable to recover it. 00:22:24.839 [2024-05-15 01:09:37.138412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.138586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.839 [2024-05-15 01:09:37.138610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.839 qpair failed and we were unable to recover it. 00:22:24.839 [2024-05-15 01:09:37.138769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.138957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.138982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.840 qpair failed and we were unable to recover it. 
00:22:24.840 [2024-05-15 01:09:37.139173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.139331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.139355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.840 qpair failed and we were unable to recover it. 00:22:24.840 [2024-05-15 01:09:37.139543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.139729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.139753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.840 qpair failed and we were unable to recover it. 00:22:24.840 [2024-05-15 01:09:37.139945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.140135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.140159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.840 qpair failed and we were unable to recover it. 00:22:24.840 [2024-05-15 01:09:37.140346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.140530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.140554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.840 qpair failed and we were unable to recover it. 00:22:24.840 [2024-05-15 01:09:37.140714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.140903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.140928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.840 qpair failed and we were unable to recover it. 00:22:24.840 [2024-05-15 01:09:37.141147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.141302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.141327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.840 qpair failed and we were unable to recover it. 00:22:24.840 [2024-05-15 01:09:37.141521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.141713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.141736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.840 qpair failed and we were unable to recover it. 
00:22:24.840 [2024-05-15 01:09:37.141900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.142073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.142098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.840 qpair failed and we were unable to recover it. 00:22:24.840 [2024-05-15 01:09:37.142254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.142410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.142434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.840 qpair failed and we were unable to recover it. 00:22:24.840 [2024-05-15 01:09:37.142619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.142825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.142848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.840 qpair failed and we were unable to recover it. 00:22:24.840 [2024-05-15 01:09:37.143009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.143190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.143214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.840 qpair failed and we were unable to recover it. 00:22:24.840 [2024-05-15 01:09:37.143398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.143550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.143574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.840 qpair failed and we were unable to recover it. 00:22:24.840 [2024-05-15 01:09:37.143730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.143920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.143952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.840 qpair failed and we were unable to recover it. 00:22:24.840 [2024-05-15 01:09:37.144120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.144295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.144319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.840 qpair failed and we were unable to recover it. 
00:22:24.840 [2024-05-15 01:09:37.144471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.144631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.144655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.840 qpair failed and we were unable to recover it. 00:22:24.840 [2024-05-15 01:09:37.144849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.145007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.145032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.840 qpair failed and we were unable to recover it. 00:22:24.840 [2024-05-15 01:09:37.145190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.145380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.145404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.840 qpair failed and we were unable to recover it. 00:22:24.840 [2024-05-15 01:09:37.145574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.145731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.145761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.840 qpair failed and we were unable to recover it. 00:22:24.840 [2024-05-15 01:09:37.145974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.146128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.146153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.840 qpair failed and we were unable to recover it. 00:22:24.840 [2024-05-15 01:09:37.146332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.146515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.146540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.840 qpair failed and we were unable to recover it. 00:22:24.840 [2024-05-15 01:09:37.146724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.146877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.146901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.840 qpair failed and we were unable to recover it. 
00:22:24.840 [2024-05-15 01:09:37.147061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.147243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.147267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.840 qpair failed and we were unable to recover it. 00:22:24.840 [2024-05-15 01:09:37.147457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.147615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.147638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.840 qpair failed and we were unable to recover it. 00:22:24.840 [2024-05-15 01:09:37.147826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.148017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.148042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.840 qpair failed and we were unable to recover it. 00:22:24.840 [2024-05-15 01:09:37.148258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.148421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.148447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.840 qpair failed and we were unable to recover it. 00:22:24.840 [2024-05-15 01:09:37.148606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.148804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.148828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.840 qpair failed and we were unable to recover it. 00:22:24.840 [2024-05-15 01:09:37.148989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.149170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.149194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.840 qpair failed and we were unable to recover it. 00:22:24.840 [2024-05-15 01:09:37.149381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.149540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.149565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.840 qpair failed and we were unable to recover it. 
00:22:24.840 [2024-05-15 01:09:37.149731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.149924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.840 [2024-05-15 01:09:37.149954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.840 qpair failed and we were unable to recover it. 00:22:24.840 [2024-05-15 01:09:37.150114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.150270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.150296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.841 qpair failed and we were unable to recover it. 00:22:24.841 [2024-05-15 01:09:37.150457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.150647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.150671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.841 qpair failed and we were unable to recover it. 00:22:24.841 [2024-05-15 01:09:37.150837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.150991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.151019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.841 qpair failed and we were unable to recover it. 00:22:24.841 [2024-05-15 01:09:37.151184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.151339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.151364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.841 qpair failed and we were unable to recover it. 00:22:24.841 [2024-05-15 01:09:37.151523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.151713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.151739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.841 qpair failed and we were unable to recover it. 00:22:24.841 [2024-05-15 01:09:37.151900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.152057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.152082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.841 qpair failed and we were unable to recover it. 
00:22:24.841 [2024-05-15 01:09:37.152268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.152447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.152471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.841 qpair failed and we were unable to recover it. 00:22:24.841 [2024-05-15 01:09:37.152677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.152834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.152858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.841 qpair failed and we were unable to recover it. 00:22:24.841 [2024-05-15 01:09:37.153035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.153218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.153243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.841 qpair failed and we were unable to recover it. 00:22:24.841 [2024-05-15 01:09:37.153428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.153620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.153644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.841 qpair failed and we were unable to recover it. 00:22:24.841 [2024-05-15 01:09:37.153809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.153975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.154000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.841 qpair failed and we were unable to recover it. 00:22:24.841 [2024-05-15 01:09:37.154163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.154320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.154345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.841 qpair failed and we were unable to recover it. 00:22:24.841 [2024-05-15 01:09:37.154508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.154673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.154701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.841 qpair failed and we were unable to recover it. 
00:22:24.841 [2024-05-15 01:09:37.154879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.155040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.155064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.841 qpair failed and we were unable to recover it. 00:22:24.841 [2024-05-15 01:09:37.155227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.155425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.155449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.841 qpair failed and we were unable to recover it. 00:22:24.841 [2024-05-15 01:09:37.155606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.155764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.155788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.841 qpair failed and we were unable to recover it. 00:22:24.841 [2024-05-15 01:09:37.155978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.156138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.156163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.841 qpair failed and we were unable to recover it. 00:22:24.841 [2024-05-15 01:09:37.156332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.156521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.156548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.841 qpair failed and we were unable to recover it. 00:22:24.841 [2024-05-15 01:09:37.156719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.156889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.156916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.841 qpair failed and we were unable to recover it. 00:22:24.841 [2024-05-15 01:09:37.157111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.157276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.157301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.841 qpair failed and we were unable to recover it. 
00:22:24.841 [2024-05-15 01:09:37.157461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.157629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.157656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.841 qpair failed and we were unable to recover it. 00:22:24.841 [2024-05-15 01:09:37.157815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.157988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.158014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.841 qpair failed and we were unable to recover it. 00:22:24.841 [2024-05-15 01:09:37.158191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.158371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.158395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.841 qpair failed and we were unable to recover it. 00:22:24.841 [2024-05-15 01:09:37.158572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.158751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.158777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.841 qpair failed and we were unable to recover it. 00:22:24.841 [2024-05-15 01:09:37.158952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.159137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.159161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.841 qpair failed and we were unable to recover it. 00:22:24.841 [2024-05-15 01:09:37.159348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.159512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.159539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.841 qpair failed and we were unable to recover it. 00:22:24.841 [2024-05-15 01:09:37.159702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.159884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.159908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:24.841 qpair failed and we were unable to recover it. 
00:22:24.841 [2024-05-15 01:09:37.160048] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e720b0 is same with the state(5) to be set 00:22:24.841 [2024-05-15 01:09:37.160282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.160467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.160498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.841 qpair failed and we were unable to recover it. 00:22:24.841 [2024-05-15 01:09:37.160661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.160847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.160872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.841 qpair failed and we were unable to recover it. 00:22:24.841 [2024-05-15 01:09:37.161067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.161227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.841 [2024-05-15 01:09:37.161252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.842 qpair failed and we were unable to recover it. 00:22:24.842 [2024-05-15 01:09:37.161433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.161622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.161646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.842 qpair failed and we were unable to recover it. 00:22:24.842 [2024-05-15 01:09:37.161811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.162000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.162027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.842 qpair failed and we were unable to recover it. 00:22:24.842 [2024-05-15 01:09:37.162189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.162380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.162406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.842 qpair failed and we were unable to recover it. 00:22:24.842 [2024-05-15 01:09:37.162565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.162723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.162747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.842 qpair failed and we were unable to recover it. 
00:22:24.842 [2024-05-15 01:09:37.162914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.163070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.163096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.842 qpair failed and we were unable to recover it. 00:22:24.842 [2024-05-15 01:09:37.163259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.163417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.163443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.842 qpair failed and we were unable to recover it. 00:22:24.842 [2024-05-15 01:09:37.163597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.163780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.163805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.842 qpair failed and we were unable to recover it. 00:22:24.842 [2024-05-15 01:09:37.163998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.164159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.164184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.842 qpair failed and we were unable to recover it. 00:22:24.842 [2024-05-15 01:09:37.164377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.164531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.164555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.842 qpair failed and we were unable to recover it. 00:22:24.842 [2024-05-15 01:09:37.164749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.164911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.164946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.842 qpair failed and we were unable to recover it. 00:22:24.842 [2024-05-15 01:09:37.165131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.165281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.165306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.842 qpair failed and we were unable to recover it. 
00:22:24.842 [2024-05-15 01:09:37.165499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.165649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.165674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.842 qpair failed and we were unable to recover it. 00:22:24.842 [2024-05-15 01:09:37.165851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.166030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.166056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.842 qpair failed and we were unable to recover it. 00:22:24.842 [2024-05-15 01:09:37.166206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.166400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.166424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.842 qpair failed and we were unable to recover it. 00:22:24.842 [2024-05-15 01:09:37.166599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.166781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.166806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.842 qpair failed and we were unable to recover it. 00:22:24.842 [2024-05-15 01:09:37.166999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.167153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.167178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.842 qpair failed and we were unable to recover it. 00:22:24.842 [2024-05-15 01:09:37.167381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.167543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.167570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.842 qpair failed and we were unable to recover it. 00:22:24.842 [2024-05-15 01:09:37.167752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.167910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.167941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.842 qpair failed and we were unable to recover it. 
00:22:24.842 [2024-05-15 01:09:37.168103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.168265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.168290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.842 qpair failed and we were unable to recover it. 00:22:24.842 [2024-05-15 01:09:37.168475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.168654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.168678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.842 qpair failed and we were unable to recover it. 00:22:24.842 [2024-05-15 01:09:37.168839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.169019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.169047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.842 qpair failed and we were unable to recover it. 00:22:24.842 [2024-05-15 01:09:37.169229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.169419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.169445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.842 qpair failed and we were unable to recover it. 00:22:24.842 [2024-05-15 01:09:37.169624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.169803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.169828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.842 qpair failed and we were unable to recover it. 00:22:24.842 [2024-05-15 01:09:37.169989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.170145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.842 [2024-05-15 01:09:37.170171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.842 qpair failed and we were unable to recover it. 00:22:24.842 [2024-05-15 01:09:37.170332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.843 [2024-05-15 01:09:37.170530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:24.843 [2024-05-15 01:09:37.170554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:24.843 qpair failed and we were unable to recover it. 
00:22:24.843 [2024-05-15 01:09:37.170713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.843 [2024-05-15 01:09:37.170876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.843 [2024-05-15 01:09:37.170901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420
00:22:24.843 qpair failed and we were unable to recover it.
[The same three-entry sequence -- two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420", followed by "qpair failed and we were unable to recover it." -- repeats continuously from 01:09:37.170713 through 01:09:37.230244, with console timestamps advancing from 00:22:24.843 to 00:22:25.121.]
00:22:25.121 [2024-05-15 01:09:37.230055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.121 [2024-05-15 01:09:37.230220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.121 [2024-05-15 01:09:37.230244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420
00:22:25.121 qpair failed and we were unable to recover it.
00:22:25.121 [2024-05-15 01:09:37.230427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.121 [2024-05-15 01:09:37.230642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.121 [2024-05-15 01:09:37.230666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63f8000b90 with addr=10.0.0.2, port=4420 00:22:25.121 qpair failed and we were unable to recover it. 00:22:25.121 [2024-05-15 01:09:37.230849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.121 [2024-05-15 01:09:37.231029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.121 [2024-05-15 01:09:37.231059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.121 qpair failed and we were unable to recover it. 00:22:25.121 [2024-05-15 01:09:37.231221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.121 [2024-05-15 01:09:37.231411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.121 [2024-05-15 01:09:37.231436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.121 qpair failed and we were unable to recover it. 00:22:25.121 [2024-05-15 01:09:37.231595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.121 [2024-05-15 01:09:37.231760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.121 [2024-05-15 01:09:37.231784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.121 qpair failed and we were unable to recover it. 00:22:25.121 [2024-05-15 01:09:37.231956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.121 [2024-05-15 01:09:37.232137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.121 [2024-05-15 01:09:37.232163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.121 qpair failed and we were unable to recover it. 00:22:25.121 [2024-05-15 01:09:37.232348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.121 [2024-05-15 01:09:37.232538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.121 [2024-05-15 01:09:37.232562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.121 qpair failed and we were unable to recover it. 00:22:25.121 [2024-05-15 01:09:37.232727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.121 [2024-05-15 01:09:37.232887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.232911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.122 qpair failed and we were unable to recover it. 
00:22:25.122 [2024-05-15 01:09:37.233078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.233244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.233269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.122 qpair failed and we were unable to recover it. 00:22:25.122 [2024-05-15 01:09:37.233486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.233646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.233670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.122 qpair failed and we were unable to recover it. 00:22:25.122 [2024-05-15 01:09:37.233828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.234023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.234048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.122 qpair failed and we were unable to recover it. 00:22:25.122 [2024-05-15 01:09:37.234225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.234412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.234439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.122 qpair failed and we were unable to recover it. 00:22:25.122 [2024-05-15 01:09:37.234607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.234790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.234814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.122 qpair failed and we were unable to recover it. 00:22:25.122 [2024-05-15 01:09:37.234978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.235148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.235172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.122 qpair failed and we were unable to recover it. 00:22:25.122 [2024-05-15 01:09:37.235319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.235476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.235501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.122 qpair failed and we were unable to recover it. 
00:22:25.122 [2024-05-15 01:09:37.235668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.235829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.235852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.122 qpair failed and we were unable to recover it. 00:22:25.122 [2024-05-15 01:09:37.236010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.236199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.236224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.122 qpair failed and we were unable to recover it. 00:22:25.122 [2024-05-15 01:09:37.236385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.236568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.236592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.122 qpair failed and we were unable to recover it. 00:22:25.122 [2024-05-15 01:09:37.236772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.236963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.236989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.122 qpair failed and we were unable to recover it. 00:22:25.122 [2024-05-15 01:09:37.237155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.237342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.237366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.122 qpair failed and we were unable to recover it. 00:22:25.122 [2024-05-15 01:09:37.237576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.237740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.237766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.122 qpair failed and we were unable to recover it. 00:22:25.122 [2024-05-15 01:09:37.237919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.238089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.238114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.122 qpair failed and we were unable to recover it. 
00:22:25.122 [2024-05-15 01:09:37.238323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.238481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.238511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.122 qpair failed and we were unable to recover it. 00:22:25.122 [2024-05-15 01:09:37.238672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.238855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.238879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.122 qpair failed and we were unable to recover it. 00:22:25.122 [2024-05-15 01:09:37.239071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.239241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.239268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.122 qpair failed and we were unable to recover it. 00:22:25.122 [2024-05-15 01:09:37.239455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.239643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.239666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.122 qpair failed and we were unable to recover it. 00:22:25.122 [2024-05-15 01:09:37.239835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.240017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.240042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.122 qpair failed and we were unable to recover it. 00:22:25.122 [2024-05-15 01:09:37.240237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.240465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.240490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.122 qpair failed and we were unable to recover it. 00:22:25.122 [2024-05-15 01:09:37.240681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.240836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.122 [2024-05-15 01:09:37.240863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.122 qpair failed and we were unable to recover it. 
00:22:25.122 [2024-05-15 01:09:37.241056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.241235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.241259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.123 qpair failed and we were unable to recover it. 00:22:25.123 [2024-05-15 01:09:37.241413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.241562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.241586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.123 qpair failed and we were unable to recover it. 00:22:25.123 [2024-05-15 01:09:37.241802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.241957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.241982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.123 qpair failed and we were unable to recover it. 00:22:25.123 [2024-05-15 01:09:37.242160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.242313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.242342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.123 qpair failed and we were unable to recover it. 00:22:25.123 [2024-05-15 01:09:37.242494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.242684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.242708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.123 qpair failed and we were unable to recover it. 00:22:25.123 [2024-05-15 01:09:37.242894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.243065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.243090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.123 qpair failed and we were unable to recover it. 00:22:25.123 [2024-05-15 01:09:37.243255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.243409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.243435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.123 qpair failed and we were unable to recover it. 
00:22:25.123 [2024-05-15 01:09:37.243603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.243761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.243785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.123 qpair failed and we were unable to recover it. 00:22:25.123 [2024-05-15 01:09:37.243962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.244136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.244161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.123 qpair failed and we were unable to recover it. 00:22:25.123 [2024-05-15 01:09:37.244319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.244526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.244550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.123 qpair failed and we were unable to recover it. 00:22:25.123 [2024-05-15 01:09:37.244712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.244877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.244902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.123 qpair failed and we were unable to recover it. 00:22:25.123 [2024-05-15 01:09:37.245073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.245234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.245258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.123 qpair failed and we were unable to recover it. 00:22:25.123 [2024-05-15 01:09:37.245417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.245624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.245648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.123 qpair failed and we were unable to recover it. 00:22:25.123 [2024-05-15 01:09:37.245809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.245990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.246016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.123 qpair failed and we were unable to recover it. 
00:22:25.123 [2024-05-15 01:09:37.246187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.246369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.246394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.123 qpair failed and we were unable to recover it. 00:22:25.123 [2024-05-15 01:09:37.246561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.246715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.246739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.123 qpair failed and we were unable to recover it. 00:22:25.123 [2024-05-15 01:09:37.246915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.247075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.247099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.123 qpair failed and we were unable to recover it. 00:22:25.123 [2024-05-15 01:09:37.247259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.247443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.247467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.123 qpair failed and we were unable to recover it. 00:22:25.123 [2024-05-15 01:09:37.247659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.247849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.247873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.123 qpair failed and we were unable to recover it. 00:22:25.123 [2024-05-15 01:09:37.248037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.248196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.248221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.123 qpair failed and we were unable to recover it. 00:22:25.123 [2024-05-15 01:09:37.248408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.248595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.248619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.123 qpair failed and we were unable to recover it. 
00:22:25.123 [2024-05-15 01:09:37.248801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.249010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.249035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.123 qpair failed and we were unable to recover it. 00:22:25.123 [2024-05-15 01:09:37.249215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.249369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.249393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.123 qpair failed and we were unable to recover it. 00:22:25.123 [2024-05-15 01:09:37.249551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.249762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.249787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.123 qpair failed and we were unable to recover it. 00:22:25.123 [2024-05-15 01:09:37.249977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.250156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.250180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.123 qpair failed and we were unable to recover it. 00:22:25.123 [2024-05-15 01:09:37.250358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.250538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.250562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.123 qpair failed and we were unable to recover it. 00:22:25.123 [2024-05-15 01:09:37.250739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.123 [2024-05-15 01:09:37.250895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.250918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.124 qpair failed and we were unable to recover it. 00:22:25.124 [2024-05-15 01:09:37.251116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.251275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.251300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.124 qpair failed and we were unable to recover it. 
00:22:25.124 [2024-05-15 01:09:37.251458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.251608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.251633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.124 qpair failed and we were unable to recover it. 00:22:25.124 [2024-05-15 01:09:37.251784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.251995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.252021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.124 qpair failed and we were unable to recover it. 00:22:25.124 [2024-05-15 01:09:37.252206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.252358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.252382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.124 qpair failed and we were unable to recover it. 00:22:25.124 [2024-05-15 01:09:37.252572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.252762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.252787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.124 qpair failed and we were unable to recover it. 00:22:25.124 [2024-05-15 01:09:37.252954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.253115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.253140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.124 qpair failed and we were unable to recover it. 00:22:25.124 [2024-05-15 01:09:37.253334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.253519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.253544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.124 qpair failed and we were unable to recover it. 00:22:25.124 [2024-05-15 01:09:37.253694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.253859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.253883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.124 qpair failed and we were unable to recover it. 
00:22:25.124 [2024-05-15 01:09:37.254072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.254229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.254255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.124 qpair failed and we were unable to recover it. 00:22:25.124 [2024-05-15 01:09:37.254442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.254631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.254655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.124 qpair failed and we were unable to recover it. 00:22:25.124 [2024-05-15 01:09:37.254838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.255043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.255068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.124 qpair failed and we were unable to recover it. 00:22:25.124 [2024-05-15 01:09:37.255254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.255467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.255492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.124 qpair failed and we were unable to recover it. 00:22:25.124 [2024-05-15 01:09:37.255645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.255826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.255851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.124 qpair failed and we were unable to recover it. 00:22:25.124 [2024-05-15 01:09:37.256008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.256167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.256191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.124 qpair failed and we were unable to recover it. 00:22:25.124 [2024-05-15 01:09:37.256361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.256515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.256539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.124 qpair failed and we were unable to recover it. 
00:22:25.124 [2024-05-15 01:09:37.256730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.256886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.256911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.124 qpair failed and we were unable to recover it. 00:22:25.124 [2024-05-15 01:09:37.257078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.257233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.257257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.124 qpair failed and we were unable to recover it. 00:22:25.124 [2024-05-15 01:09:37.257420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.257573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.257598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.124 qpair failed and we were unable to recover it. 00:22:25.124 [2024-05-15 01:09:37.257778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.257944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.257968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.124 qpair failed and we were unable to recover it. 00:22:25.124 [2024-05-15 01:09:37.258130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.258288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.258314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.124 qpair failed and we were unable to recover it. 00:22:25.124 [2024-05-15 01:09:37.258504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.258660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.258685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.124 qpair failed and we were unable to recover it. 00:22:25.124 [2024-05-15 01:09:37.258862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.259054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.259078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.124 qpair failed and we were unable to recover it. 
00:22:25.124 [2024-05-15 01:09:37.259248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.259431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.259456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.124 qpair failed and we were unable to recover it. 00:22:25.124 [2024-05-15 01:09:37.259644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.259799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.259823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.124 qpair failed and we were unable to recover it. 00:22:25.124 [2024-05-15 01:09:37.259985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.260151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.260175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.124 qpair failed and we were unable to recover it. 00:22:25.124 [2024-05-15 01:09:37.260344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.260524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.260549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.124 qpair failed and we were unable to recover it. 00:22:25.124 [2024-05-15 01:09:37.260732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.260889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.260915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.124 qpair failed and we were unable to recover it. 00:22:25.124 [2024-05-15 01:09:37.261084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.124 [2024-05-15 01:09:37.261270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.261299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.125 qpair failed and we were unable to recover it. 00:22:25.125 [2024-05-15 01:09:37.261485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.261662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.261686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.125 qpair failed and we were unable to recover it. 
00:22:25.125 [2024-05-15 01:09:37.261843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.262059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.262084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.125 qpair failed and we were unable to recover it. 00:22:25.125 [2024-05-15 01:09:37.262272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.262426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.262450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.125 qpair failed and we were unable to recover it. 00:22:25.125 [2024-05-15 01:09:37.262603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.262781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.262805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.125 qpair failed and we were unable to recover it. 00:22:25.125 [2024-05-15 01:09:37.262959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.263125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.263150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.125 qpair failed and we were unable to recover it. 00:22:25.125 [2024-05-15 01:09:37.263331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.263540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.263564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.125 qpair failed and we were unable to recover it. 00:22:25.125 [2024-05-15 01:09:37.263727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.263909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.263939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.125 qpair failed and we were unable to recover it. 00:22:25.125 [2024-05-15 01:09:37.264128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.264282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.264307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.125 qpair failed and we were unable to recover it. 
00:22:25.125 [2024-05-15 01:09:37.264499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.264659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.264683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.125 qpair failed and we were unable to recover it. 00:22:25.125 [2024-05-15 01:09:37.264863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.265017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.265042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.125 qpair failed and we were unable to recover it. 00:22:25.125 [2024-05-15 01:09:37.265214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.265378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.265402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.125 qpair failed and we were unable to recover it. 00:22:25.125 [2024-05-15 01:09:37.265563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.265775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.265800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.125 qpair failed and we were unable to recover it. 00:22:25.125 [2024-05-15 01:09:37.265978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.266134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.266158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.125 qpair failed and we were unable to recover it. 00:22:25.125 [2024-05-15 01:09:37.266343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.266498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.266522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.125 qpair failed and we were unable to recover it. 00:22:25.125 [2024-05-15 01:09:37.266715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.266896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.266920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.125 qpair failed and we were unable to recover it. 
00:22:25.125 [2024-05-15 01:09:37.267111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.267304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.267328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.125 qpair failed and we were unable to recover it. 00:22:25.125 [2024-05-15 01:09:37.267482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.267673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.267698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.125 qpair failed and we were unable to recover it. 00:22:25.125 [2024-05-15 01:09:37.267882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.268033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.268058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.125 qpair failed and we were unable to recover it. 00:22:25.125 [2024-05-15 01:09:37.268243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.268404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.268429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.125 qpair failed and we were unable to recover it. 00:22:25.125 [2024-05-15 01:09:37.268585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.268746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.268770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.125 qpair failed and we were unable to recover it. 00:22:25.125 [2024-05-15 01:09:37.268966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.269162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.269187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.125 qpair failed and we were unable to recover it. 00:22:25.125 [2024-05-15 01:09:37.269347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.269532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.269556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.125 qpair failed and we were unable to recover it. 
00:22:25.125 [2024-05-15 01:09:37.269739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.269894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.269918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.125 qpair failed and we were unable to recover it. 00:22:25.125 [2024-05-15 01:09:37.270082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.270276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.270301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.125 qpair failed and we were unable to recover it. 00:22:25.125 [2024-05-15 01:09:37.270461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.270615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.270640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.125 qpair failed and we were unable to recover it. 00:22:25.125 [2024-05-15 01:09:37.270853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.271023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.271050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.125 qpair failed and we were unable to recover it. 00:22:25.125 [2024-05-15 01:09:37.271206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.125 [2024-05-15 01:09:37.271366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.271393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.126 qpair failed and we were unable to recover it. 00:22:25.126 [2024-05-15 01:09:37.271590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.271746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.271770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.126 qpair failed and we were unable to recover it. 00:22:25.126 [2024-05-15 01:09:37.271957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.272118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.272142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.126 qpair failed and we were unable to recover it. 
00:22:25.126 [2024-05-15 01:09:37.272352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.272527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.272552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.126 qpair failed and we were unable to recover it. 00:22:25.126 [2024-05-15 01:09:37.272710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.272872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.272897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.126 qpair failed and we were unable to recover it. 00:22:25.126 [2024-05-15 01:09:37.273118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.273282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.273306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.126 qpair failed and we were unable to recover it. 00:22:25.126 [2024-05-15 01:09:37.273467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.273650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.273675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.126 qpair failed and we were unable to recover it. 00:22:25.126 [2024-05-15 01:09:37.273829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.273991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.274016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.126 qpair failed and we were unable to recover it. 00:22:25.126 [2024-05-15 01:09:37.274174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.274353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.274378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.126 qpair failed and we were unable to recover it. 00:22:25.126 [2024-05-15 01:09:37.274536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.274719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.274743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.126 qpair failed and we were unable to recover it. 
00:22:25.126 [2024-05-15 01:09:37.274925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.275095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.275120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.126 qpair failed and we were unable to recover it. 00:22:25.126 [2024-05-15 01:09:37.275280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.275437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.275463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.126 qpair failed and we were unable to recover it. 00:22:25.126 [2024-05-15 01:09:37.275653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.275803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.275828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.126 qpair failed and we were unable to recover it. 00:22:25.126 [2024-05-15 01:09:37.276010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.276175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.276200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.126 qpair failed and we were unable to recover it. 00:22:25.126 [2024-05-15 01:09:37.276356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.276542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.276567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.126 qpair failed and we were unable to recover it. 00:22:25.126 [2024-05-15 01:09:37.276727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.276879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.276903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.126 qpair failed and we were unable to recover it. 00:22:25.126 [2024-05-15 01:09:37.277101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.277312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.277337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.126 qpair failed and we were unable to recover it. 
00:22:25.126 [2024-05-15 01:09:37.277490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.277667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.277692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.126 qpair failed and we were unable to recover it. 00:22:25.126 [2024-05-15 01:09:37.277857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.278015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.278040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.126 qpair failed and we were unable to recover it. 00:22:25.126 [2024-05-15 01:09:37.278191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.278372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.278397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.126 qpair failed and we were unable to recover it. 00:22:25.126 [2024-05-15 01:09:37.278556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.278744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.278769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.126 qpair failed and we were unable to recover it. 00:22:25.126 [2024-05-15 01:09:37.278938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.279146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.279172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.126 qpair failed and we were unable to recover it. 00:22:25.126 [2024-05-15 01:09:37.279354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.279534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.279558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.126 qpair failed and we were unable to recover it. 00:22:25.126 [2024-05-15 01:09:37.279710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.279922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.279963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.126 qpair failed and we were unable to recover it. 
00:22:25.126 [2024-05-15 01:09:37.280140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.280293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.280322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.126 qpair failed and we were unable to recover it. 00:22:25.126 [2024-05-15 01:09:37.280481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.280652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.126 [2024-05-15 01:09:37.280677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.127 qpair failed and we were unable to recover it. 00:22:25.127 [2024-05-15 01:09:37.280827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.280985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.281010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.127 qpair failed and we were unable to recover it. 00:22:25.127 [2024-05-15 01:09:37.281176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.281332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.281357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.127 qpair failed and we were unable to recover it. 00:22:25.127 [2024-05-15 01:09:37.281543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.281698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.281722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.127 qpair failed and we were unable to recover it. 00:22:25.127 [2024-05-15 01:09:37.281904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.282072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.282097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.127 qpair failed and we were unable to recover it. 00:22:25.127 [2024-05-15 01:09:37.282256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.282444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.282468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.127 qpair failed and we were unable to recover it. 
00:22:25.127 [2024-05-15 01:09:37.282626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.282783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.282809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.127 qpair failed and we were unable to recover it. 00:22:25.127 [2024-05-15 01:09:37.282967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.283160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.283184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.127 qpair failed and we were unable to recover it. 00:22:25.127 [2024-05-15 01:09:37.283396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.283581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.283605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.127 qpair failed and we were unable to recover it. 00:22:25.127 [2024-05-15 01:09:37.283790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.283983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.284008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.127 qpair failed and we were unable to recover it. 00:22:25.127 [2024-05-15 01:09:37.284223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.284379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.284405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.127 qpair failed and we were unable to recover it. 00:22:25.127 [2024-05-15 01:09:37.284586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.284772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.284798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.127 qpair failed and we were unable to recover it. 00:22:25.127 [2024-05-15 01:09:37.284977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.285138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.285162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.127 qpair failed and we were unable to recover it. 
00:22:25.127 [2024-05-15 01:09:37.285313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.285485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.285509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.127 qpair failed and we were unable to recover it. 00:22:25.127 [2024-05-15 01:09:37.285697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.285889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.285913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.127 qpair failed and we were unable to recover it. 00:22:25.127 [2024-05-15 01:09:37.286083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.286239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.286264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.127 qpair failed and we were unable to recover it. 00:22:25.127 [2024-05-15 01:09:37.286423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.286615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.286639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.127 qpair failed and we were unable to recover it. 00:22:25.127 [2024-05-15 01:09:37.286812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.286975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.287001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.127 qpair failed and we were unable to recover it. 00:22:25.127 [2024-05-15 01:09:37.287187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.287399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.287424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.127 qpair failed and we were unable to recover it. 00:22:25.127 [2024-05-15 01:09:37.287579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.287764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.287789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.127 qpair failed and we were unable to recover it. 
00:22:25.127 [2024-05-15 01:09:37.287958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.288119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.288146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.127 qpair failed and we were unable to recover it. 00:22:25.127 [2024-05-15 01:09:37.288299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.288509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.127 [2024-05-15 01:09:37.288534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.127 qpair failed and we were unable to recover it. 00:22:25.128 [2024-05-15 01:09:37.288717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.288870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.288895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.128 qpair failed and we were unable to recover it. 00:22:25.128 [2024-05-15 01:09:37.289068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.289229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.289253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.128 qpair failed and we were unable to recover it. 00:22:25.128 [2024-05-15 01:09:37.289427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.289575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.289599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.128 qpair failed and we were unable to recover it. 00:22:25.128 [2024-05-15 01:09:37.289792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.289969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.289994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.128 qpair failed and we were unable to recover it. 00:22:25.128 [2024-05-15 01:09:37.290151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.290334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.290359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.128 qpair failed and we were unable to recover it. 
00:22:25.128 [2024-05-15 01:09:37.290548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.290739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.290763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.128 qpair failed and we were unable to recover it. 00:22:25.128 [2024-05-15 01:09:37.290914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.291088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.291115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.128 qpair failed and we were unable to recover it. 00:22:25.128 [2024-05-15 01:09:37.291293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.291486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.291511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.128 qpair failed and we were unable to recover it. 00:22:25.128 [2024-05-15 01:09:37.291690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.291868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.291892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.128 qpair failed and we were unable to recover it. 00:22:25.128 [2024-05-15 01:09:37.292057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.292239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.292264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.128 qpair failed and we were unable to recover it. 00:22:25.128 [2024-05-15 01:09:37.292431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.292613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.292638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.128 qpair failed and we were unable to recover it. 00:22:25.128 [2024-05-15 01:09:37.292843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.293008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.293033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.128 qpair failed and we were unable to recover it. 
00:22:25.128 [2024-05-15 01:09:37.293186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.293392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.293417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.128 qpair failed and we were unable to recover it. 00:22:25.128 [2024-05-15 01:09:37.293594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.293743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.293767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.128 qpair failed and we were unable to recover it. 00:22:25.128 [2024-05-15 01:09:37.293984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.294146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.294172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.128 qpair failed and we were unable to recover it. 00:22:25.128 [2024-05-15 01:09:37.294392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.294551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.294577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.128 qpair failed and we were unable to recover it. 00:22:25.128 [2024-05-15 01:09:37.294732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.294899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.294924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.128 qpair failed and we were unable to recover it. 00:22:25.128 [2024-05-15 01:09:37.295140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.295301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.295326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.128 qpair failed and we were unable to recover it. 00:22:25.128 [2024-05-15 01:09:37.295477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.295634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.295659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.128 qpair failed and we were unable to recover it. 
00:22:25.128 [2024-05-15 01:09:37.295822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.295993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.296018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.128 qpair failed and we were unable to recover it. 00:22:25.128 [2024-05-15 01:09:37.296212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.296368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.296395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.128 qpair failed and we were unable to recover it. 00:22:25.128 [2024-05-15 01:09:37.296580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.296764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.296788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.128 qpair failed and we were unable to recover it. 00:22:25.128 [2024-05-15 01:09:37.297002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.297193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.297218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.128 qpair failed and we were unable to recover it. 00:22:25.128 [2024-05-15 01:09:37.297400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.297559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.297584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.128 qpair failed and we were unable to recover it. 00:22:25.128 [2024-05-15 01:09:37.297768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.297941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.297967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.128 qpair failed and we were unable to recover it. 00:22:25.128 [2024-05-15 01:09:37.298149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.298334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.298358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.128 qpair failed and we were unable to recover it. 
00:22:25.128 [2024-05-15 01:09:37.298552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.298710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.298737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.128 qpair failed and we were unable to recover it. 00:22:25.128 [2024-05-15 01:09:37.298897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.299066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.128 [2024-05-15 01:09:37.299092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.129 qpair failed and we were unable to recover it. 00:22:25.129 [2024-05-15 01:09:37.299248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.299432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.299461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.129 qpair failed and we were unable to recover it. 00:22:25.129 [2024-05-15 01:09:37.299655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.299809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.299834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.129 qpair failed and we were unable to recover it. 00:22:25.129 [2024-05-15 01:09:37.300017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.300201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.300226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.129 qpair failed and we were unable to recover it. 00:22:25.129 [2024-05-15 01:09:37.300443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.300604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.300628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.129 qpair failed and we were unable to recover it. 00:22:25.129 [2024-05-15 01:09:37.300783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.300999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.301025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.129 qpair failed and we were unable to recover it. 
00:22:25.129 [2024-05-15 01:09:37.301208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.301393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.301417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.129 qpair failed and we were unable to recover it. 00:22:25.129 [2024-05-15 01:09:37.301603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.301756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.301781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.129 qpair failed and we were unable to recover it. 00:22:25.129 [2024-05-15 01:09:37.301943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.302119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.302144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.129 qpair failed and we were unable to recover it. 00:22:25.129 [2024-05-15 01:09:37.302332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.302517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.302544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.129 qpair failed and we were unable to recover it. 00:22:25.129 [2024-05-15 01:09:37.302737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.302933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.302958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.129 qpair failed and we were unable to recover it. 00:22:25.129 [2024-05-15 01:09:37.303147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.303324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.303355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.129 qpair failed and we were unable to recover it. 00:22:25.129 [2024-05-15 01:09:37.303544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.303702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.303726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.129 qpair failed and we were unable to recover it. 
00:22:25.129 [2024-05-15 01:09:37.303890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.304085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.304110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.129 qpair failed and we were unable to recover it. 00:22:25.129 [2024-05-15 01:09:37.304267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.304442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.304467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.129 qpair failed and we were unable to recover it. 00:22:25.129 [2024-05-15 01:09:37.304672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.304823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.304847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.129 qpair failed and we were unable to recover it. 00:22:25.129 [2024-05-15 01:09:37.305018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.305180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.305205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.129 qpair failed and we were unable to recover it. 00:22:25.129 [2024-05-15 01:09:37.305396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.305588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.305615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.129 qpair failed and we were unable to recover it. 00:22:25.129 [2024-05-15 01:09:37.305807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.305971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.305996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.129 qpair failed and we were unable to recover it. 00:22:25.129 [2024-05-15 01:09:37.306152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.306329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.306353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.129 qpair failed and we were unable to recover it. 
00:22:25.129 [2024-05-15 01:09:37.306535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.306683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.306708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.129 qpair failed and we were unable to recover it. 00:22:25.129 [2024-05-15 01:09:37.306886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.307039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.307064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.129 qpair failed and we were unable to recover it. 00:22:25.129 [2024-05-15 01:09:37.307221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.307371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.307395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.129 qpair failed and we were unable to recover it. 00:22:25.129 [2024-05-15 01:09:37.307600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.307754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.307778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.129 qpair failed and we were unable to recover it. 00:22:25.129 [2024-05-15 01:09:37.307937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.308103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.308128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.129 qpair failed and we were unable to recover it. 00:22:25.129 [2024-05-15 01:09:37.308295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.308452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.308476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.129 qpair failed and we were unable to recover it. 00:22:25.129 [2024-05-15 01:09:37.308637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.308826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.308853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.129 qpair failed and we were unable to recover it. 
00:22:25.129 [2024-05-15 01:09:37.309029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.309184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.309210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.129 qpair failed and we were unable to recover it. 00:22:25.129 [2024-05-15 01:09:37.309396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.309554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.309578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.129 qpair failed and we were unable to recover it. 00:22:25.129 [2024-05-15 01:09:37.309731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.129 [2024-05-15 01:09:37.309910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.309941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.130 qpair failed and we were unable to recover it. 00:22:25.130 [2024-05-15 01:09:37.310103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.310272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.310297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.130 qpair failed and we were unable to recover it. 00:22:25.130 [2024-05-15 01:09:37.310485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.310700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.310724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.130 qpair failed and we were unable to recover it. 00:22:25.130 [2024-05-15 01:09:37.310886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.311074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.311100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.130 qpair failed and we were unable to recover it. 00:22:25.130 [2024-05-15 01:09:37.311310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.311524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.311549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.130 qpair failed and we were unable to recover it. 
00:22:25.130 [2024-05-15 01:09:37.311707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.311875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.311899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.130 qpair failed and we were unable to recover it. 00:22:25.130 [2024-05-15 01:09:37.312097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.312287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.312311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.130 qpair failed and we were unable to recover it. 00:22:25.130 [2024-05-15 01:09:37.312494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.312649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.312673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.130 qpair failed and we were unable to recover it. 00:22:25.130 [2024-05-15 01:09:37.312855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.313039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.313065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.130 qpair failed and we were unable to recover it. 00:22:25.130 [2024-05-15 01:09:37.313230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.313390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.313415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.130 qpair failed and we were unable to recover it. 00:22:25.130 [2024-05-15 01:09:37.313574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.313729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.313754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.130 qpair failed and we were unable to recover it. 00:22:25.130 [2024-05-15 01:09:37.313951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.314134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.314159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.130 qpair failed and we were unable to recover it. 
00:22:25.130 [2024-05-15 01:09:37.314311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.314498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.314522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.130 qpair failed and we were unable to recover it. 00:22:25.130 [2024-05-15 01:09:37.314690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.314856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.314881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.130 qpair failed and we were unable to recover it. 00:22:25.130 [2024-05-15 01:09:37.315092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.315244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.315269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.130 qpair failed and we were unable to recover it. 00:22:25.130 [2024-05-15 01:09:37.315421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.315646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.315670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.130 qpair failed and we were unable to recover it. 00:22:25.130 [2024-05-15 01:09:37.315828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.316041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.316067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.130 qpair failed and we were unable to recover it. 00:22:25.130 [2024-05-15 01:09:37.316224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.316400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.316425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.130 qpair failed and we were unable to recover it. 00:22:25.130 [2024-05-15 01:09:37.316579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.316790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.316814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.130 qpair failed and we were unable to recover it. 
00:22:25.130 [2024-05-15 01:09:37.316982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.317169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.317193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.130 qpair failed and we were unable to recover it. 00:22:25.130 [2024-05-15 01:09:37.317349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.317534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.317559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.130 qpair failed and we were unable to recover it. 00:22:25.130 [2024-05-15 01:09:37.317747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.317955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.317980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.130 qpair failed and we were unable to recover it. 00:22:25.130 [2024-05-15 01:09:37.318155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.318308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.318333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.130 qpair failed and we were unable to recover it. 00:22:25.130 [2024-05-15 01:09:37.318498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.318659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.318687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.130 qpair failed and we were unable to recover it. 00:22:25.130 [2024-05-15 01:09:37.318844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.319026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.319051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.130 qpair failed and we were unable to recover it. 00:22:25.130 [2024-05-15 01:09:37.319240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.319416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.319441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.130 qpair failed and we were unable to recover it. 
00:22:25.130 [2024-05-15 01:09:37.319594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.319774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.319799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.130 qpair failed and we were unable to recover it. 00:22:25.130 [2024-05-15 01:09:37.319983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.320147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.320171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.130 qpair failed and we were unable to recover it. 00:22:25.130 [2024-05-15 01:09:37.320350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.320535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.320559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.130 qpair failed and we were unable to recover it. 00:22:25.130 [2024-05-15 01:09:37.320724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.320890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.130 [2024-05-15 01:09:37.320915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.131 qpair failed and we were unable to recover it. 00:22:25.131 [2024-05-15 01:09:37.321074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.321234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.321259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.131 qpair failed and we were unable to recover it. 00:22:25.131 [2024-05-15 01:09:37.321423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.321571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.321596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.131 qpair failed and we were unable to recover it. 00:22:25.131 [2024-05-15 01:09:37.321745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.321917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.321947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.131 qpair failed and we were unable to recover it. 
00:22:25.131 [2024-05-15 01:09:37.322116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.322275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.322305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.131 qpair failed and we were unable to recover it. 00:22:25.131 [2024-05-15 01:09:37.322469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.322652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.322677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.131 qpair failed and we were unable to recover it. 00:22:25.131 [2024-05-15 01:09:37.322842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.323006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.323032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.131 qpair failed and we were unable to recover it. 00:22:25.131 [2024-05-15 01:09:37.323220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.323387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.323411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.131 qpair failed and we were unable to recover it. 00:22:25.131 [2024-05-15 01:09:37.323609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.323777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.323802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.131 qpair failed and we were unable to recover it. 00:22:25.131 [2024-05-15 01:09:37.323983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.324163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.324188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.131 qpair failed and we were unable to recover it. 00:22:25.131 [2024-05-15 01:09:37.324352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.324504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.324529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.131 qpair failed and we were unable to recover it. 
00:22:25.131 [2024-05-15 01:09:37.324720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.324869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.324894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.131 qpair failed and we were unable to recover it. 00:22:25.131 [2024-05-15 01:09:37.325061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.325248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.325272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.131 qpair failed and we were unable to recover it. 00:22:25.131 [2024-05-15 01:09:37.325431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.325614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.325638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.131 qpair failed and we were unable to recover it. 00:22:25.131 [2024-05-15 01:09:37.325830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.325991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.326017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.131 qpair failed and we were unable to recover it. 00:22:25.131 [2024-05-15 01:09:37.326184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.326340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.326365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.131 qpair failed and we were unable to recover it. 00:22:25.131 [2024-05-15 01:09:37.326525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.326680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.326705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.131 qpair failed and we were unable to recover it. 00:22:25.131 [2024-05-15 01:09:37.326854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.327036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.327062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.131 qpair failed and we were unable to recover it. 
00:22:25.131 [2024-05-15 01:09:37.327240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.327411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.327436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.131 qpair failed and we were unable to recover it. 00:22:25.131 [2024-05-15 01:09:37.327649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.327798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.327823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.131 qpair failed and we were unable to recover it. 00:22:25.131 [2024-05-15 01:09:37.327987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.328202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.328227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.131 qpair failed and we were unable to recover it. 00:22:25.131 [2024-05-15 01:09:37.328389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.328570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.328595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.131 qpair failed and we were unable to recover it. 00:22:25.131 [2024-05-15 01:09:37.328748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.131 [2024-05-15 01:09:37.328935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.328960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.132 qpair failed and we were unable to recover it. 00:22:25.132 [2024-05-15 01:09:37.329112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.329295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.329319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.132 qpair failed and we were unable to recover it. 00:22:25.132 [2024-05-15 01:09:37.329498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.329686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.329710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.132 qpair failed and we were unable to recover it. 
00:22:25.132 [2024-05-15 01:09:37.329876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.330064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.330090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.132 qpair failed and we were unable to recover it. 00:22:25.132 [2024-05-15 01:09:37.330255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.330433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.330458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.132 qpair failed and we were unable to recover it. 00:22:25.132 [2024-05-15 01:09:37.330639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.330815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.330839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.132 qpair failed and we were unable to recover it. 00:22:25.132 [2024-05-15 01:09:37.331006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.331167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.331194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.132 qpair failed and we were unable to recover it. 00:22:25.132 [2024-05-15 01:09:37.331351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.331561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.331586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.132 qpair failed and we were unable to recover it. 00:22:25.132 [2024-05-15 01:09:37.331743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.331928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.331957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.132 qpair failed and we were unable to recover it. 00:22:25.132 [2024-05-15 01:09:37.332138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.332313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.332338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.132 qpair failed and we were unable to recover it. 
00:22:25.132 [2024-05-15 01:09:37.332500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.332688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.332713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.132 qpair failed and we were unable to recover it. 00:22:25.132 [2024-05-15 01:09:37.332868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.333046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.333072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.132 qpair failed and we were unable to recover it. 00:22:25.132 [2024-05-15 01:09:37.333249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.333428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.333452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.132 qpair failed and we were unable to recover it. 00:22:25.132 [2024-05-15 01:09:37.333636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.333821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.333845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.132 qpair failed and we were unable to recover it. 00:22:25.132 [2024-05-15 01:09:37.334039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.334187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.334212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.132 qpair failed and we were unable to recover it. 00:22:25.132 [2024-05-15 01:09:37.334369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.334539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.334563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.132 qpair failed and we were unable to recover it. 00:22:25.132 [2024-05-15 01:09:37.334751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.334914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.334947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.132 qpair failed and we were unable to recover it. 
00:22:25.132 [2024-05-15 01:09:37.335142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.335293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.335318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.132 qpair failed and we were unable to recover it. 00:22:25.132 [2024-05-15 01:09:37.335522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.335677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.335702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.132 qpair failed and we were unable to recover it. 00:22:25.132 [2024-05-15 01:09:37.335887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.336051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.336076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.132 qpair failed and we were unable to recover it. 00:22:25.132 [2024-05-15 01:09:37.336253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.336434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.336459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.132 qpair failed and we were unable to recover it. 00:22:25.132 [2024-05-15 01:09:37.336647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.336825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.336850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.132 qpair failed and we were unable to recover it. 00:22:25.132 [2024-05-15 01:09:37.337016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.337232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.337257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.132 qpair failed and we were unable to recover it. 00:22:25.132 [2024-05-15 01:09:37.337446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.337605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.337630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.132 qpair failed and we were unable to recover it. 
00:22:25.132 [2024-05-15 01:09:37.337813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.338001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.338026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.132 qpair failed and we were unable to recover it. 00:22:25.132 [2024-05-15 01:09:37.338182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.338338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.338361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.132 qpair failed and we were unable to recover it. 00:22:25.132 [2024-05-15 01:09:37.338540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.338750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.338774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.132 qpair failed and we were unable to recover it. 00:22:25.132 [2024-05-15 01:09:37.338938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.339100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.339125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.132 qpair failed and we were unable to recover it. 00:22:25.132 [2024-05-15 01:09:37.339291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.339464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.339489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.132 qpair failed and we were unable to recover it. 00:22:25.132 [2024-05-15 01:09:37.339673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.339864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.132 [2024-05-15 01:09:37.339889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.132 qpair failed and we were unable to recover it. 00:22:25.133 [2024-05-15 01:09:37.340053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.340215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.340240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.133 qpair failed and we were unable to recover it. 
00:22:25.133 [2024-05-15 01:09:37.340401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.340582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.340607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.133 qpair failed and we were unable to recover it. 00:22:25.133 [2024-05-15 01:09:37.340813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.340975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.341001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.133 qpair failed and we were unable to recover it. 00:22:25.133 [2024-05-15 01:09:37.341190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.341346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.341375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.133 qpair failed and we were unable to recover it. 00:22:25.133 [2024-05-15 01:09:37.341587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.341741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.341768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.133 qpair failed and we were unable to recover it. 00:22:25.133 [2024-05-15 01:09:37.341937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.342088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.342113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.133 qpair failed and we were unable to recover it. 00:22:25.133 [2024-05-15 01:09:37.342271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.342450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.342475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.133 qpair failed and we were unable to recover it. 00:22:25.133 [2024-05-15 01:09:37.342648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.342831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.342856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.133 qpair failed and we were unable to recover it. 
00:22:25.133 [2024-05-15 01:09:37.343014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.343196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.343221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.133 qpair failed and we were unable to recover it. 00:22:25.133 [2024-05-15 01:09:37.343409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.343592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.343617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.133 qpair failed and we were unable to recover it. 00:22:25.133 [2024-05-15 01:09:37.343809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.343997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.344023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.133 qpair failed and we were unable to recover it. 00:22:25.133 [2024-05-15 01:09:37.344204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.344359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.344385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.133 qpair failed and we were unable to recover it. 00:22:25.133 [2024-05-15 01:09:37.344566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.344776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.344800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.133 qpair failed and we were unable to recover it. 00:22:25.133 [2024-05-15 01:09:37.344959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.345112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.345136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.133 qpair failed and we were unable to recover it. 00:22:25.133 [2024-05-15 01:09:37.345303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.345485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.345510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.133 qpair failed and we were unable to recover it. 
00:22:25.133 [2024-05-15 01:09:37.345676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.345855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.345880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.133 qpair failed and we were unable to recover it. 00:22:25.133 [2024-05-15 01:09:37.346068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.346236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.346261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.133 qpair failed and we were unable to recover it. 00:22:25.133 [2024-05-15 01:09:37.346409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.346616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.346640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.133 qpair failed and we were unable to recover it. 00:22:25.133 [2024-05-15 01:09:37.346792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.346981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.347006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.133 qpair failed and we were unable to recover it. 00:22:25.133 [2024-05-15 01:09:37.347171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.347346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.347370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.133 qpair failed and we were unable to recover it. 00:22:25.133 [2024-05-15 01:09:37.347558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.347712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.347736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.133 qpair failed and we were unable to recover it. 00:22:25.133 [2024-05-15 01:09:37.347928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.348089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.348114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.133 qpair failed and we were unable to recover it. 
00:22:25.133 [2024-05-15 01:09:37.348287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.348467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.348492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.133 qpair failed and we were unable to recover it. 00:22:25.133 [2024-05-15 01:09:37.348674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.348833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.348860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.133 qpair failed and we were unable to recover it. 00:22:25.133 [2024-05-15 01:09:37.349057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.349252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.349278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.133 qpair failed and we were unable to recover it. 00:22:25.133 [2024-05-15 01:09:37.349437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.349624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.349649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.133 qpair failed and we were unable to recover it. 00:22:25.133 [2024-05-15 01:09:37.349824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.350031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.350057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.133 qpair failed and we were unable to recover it. 00:22:25.133 [2024-05-15 01:09:37.350215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.350397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.350422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.133 qpair failed and we were unable to recover it. 00:22:25.133 [2024-05-15 01:09:37.350576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.350754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.350778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.133 qpair failed and we were unable to recover it. 
00:22:25.133 [2024-05-15 01:09:37.350953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.133 [2024-05-15 01:09:37.351132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.351157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.134 qpair failed and we were unable to recover it. 00:22:25.134 [2024-05-15 01:09:37.351341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.351527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.351553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.134 qpair failed and we were unable to recover it. 00:22:25.134 [2024-05-15 01:09:37.351735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.351897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.351922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.134 qpair failed and we were unable to recover it. 00:22:25.134 [2024-05-15 01:09:37.352110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.352264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.352289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.134 qpair failed and we were unable to recover it. 00:22:25.134 [2024-05-15 01:09:37.352449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.352602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.352626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.134 qpair failed and we were unable to recover it. 00:22:25.134 [2024-05-15 01:09:37.352818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.352989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.353017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.134 qpair failed and we were unable to recover it. 00:22:25.134 [2024-05-15 01:09:37.353177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.353327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.353351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.134 qpair failed and we were unable to recover it. 
00:22:25.134 [2024-05-15 01:09:37.353512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.353727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.353752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.134 qpair failed and we were unable to recover it. 00:22:25.134 [2024-05-15 01:09:37.353905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.354098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.354124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.134 qpair failed and we were unable to recover it. 00:22:25.134 [2024-05-15 01:09:37.354290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.354439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.354463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.134 qpair failed and we were unable to recover it. 00:22:25.134 [2024-05-15 01:09:37.354635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.354814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.354839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.134 qpair failed and we were unable to recover it. 00:22:25.134 [2024-05-15 01:09:37.355039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.355205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.355232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.134 qpair failed and we were unable to recover it. 00:22:25.134 [2024-05-15 01:09:37.355444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.355633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.355658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.134 qpair failed and we were unable to recover it. 00:22:25.134 [2024-05-15 01:09:37.355848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.356003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.356029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.134 qpair failed and we were unable to recover it. 
00:22:25.134 [2024-05-15 01:09:37.356214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.356372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.356398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.134 qpair failed and we were unable to recover it. 00:22:25.134 [2024-05-15 01:09:37.356559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.356740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.356765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.134 qpair failed and we were unable to recover it. 00:22:25.134 [2024-05-15 01:09:37.356952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.357116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.357141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.134 qpair failed and we were unable to recover it. 00:22:25.134 [2024-05-15 01:09:37.357308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.357492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.357517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.134 qpair failed and we were unable to recover it. 00:22:25.134 [2024-05-15 01:09:37.357672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.357882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.357907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.134 qpair failed and we were unable to recover it. 00:22:25.134 [2024-05-15 01:09:37.358092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.358277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.358301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.134 qpair failed and we were unable to recover it. 00:22:25.134 [2024-05-15 01:09:37.358463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.358649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.358673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.134 qpair failed and we were unable to recover it. 
00:22:25.134 [2024-05-15 01:09:37.358861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.359020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.359045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.134 qpair failed and we were unable to recover it. 00:22:25.134 [2024-05-15 01:09:37.359225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.359401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.359426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.134 qpair failed and we were unable to recover it. 00:22:25.134 [2024-05-15 01:09:37.359649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.359813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.359838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.134 qpair failed and we were unable to recover it. 00:22:25.134 [2024-05-15 01:09:37.360020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.360180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.360204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.134 qpair failed and we were unable to recover it. 00:22:25.134 [2024-05-15 01:09:37.360393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.360546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.360577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.134 qpair failed and we were unable to recover it. 00:22:25.134 [2024-05-15 01:09:37.360772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.360955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.360981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.134 qpair failed and we were unable to recover it. 00:22:25.134 [2024-05-15 01:09:37.361145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.361302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.361326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.134 qpair failed and we were unable to recover it. 
00:22:25.134 [2024-05-15 01:09:37.361504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.361721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.134 [2024-05-15 01:09:37.361745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.134 qpair failed and we were unable to recover it. 00:22:25.135 [2024-05-15 01:09:37.361904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.362076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.362101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.135 qpair failed and we were unable to recover it. 00:22:25.135 [2024-05-15 01:09:37.362284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.362466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.362490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.135 qpair failed and we were unable to recover it. 00:22:25.135 [2024-05-15 01:09:37.362643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.362848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.362873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.135 qpair failed and we were unable to recover it. 00:22:25.135 [2024-05-15 01:09:37.363032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.363215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.363240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.135 qpair failed and we were unable to recover it. 00:22:25.135 [2024-05-15 01:09:37.363398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.363547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.363571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.135 qpair failed and we were unable to recover it. 00:22:25.135 [2024-05-15 01:09:37.363727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.363912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.363941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.135 qpair failed and we were unable to recover it. 
00:22:25.135 [2024-05-15 01:09:37.364109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.364293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.364317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.135 qpair failed and we were unable to recover it. 00:22:25.135 [2024-05-15 01:09:37.364487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.364682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.364707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.135 qpair failed and we were unable to recover it. 00:22:25.135 [2024-05-15 01:09:37.364895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.365052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.365078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.135 qpair failed and we were unable to recover it. 00:22:25.135 [2024-05-15 01:09:37.365241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.365425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.365451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.135 qpair failed and we were unable to recover it. 00:22:25.135 [2024-05-15 01:09:37.365613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.365762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.365786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.135 qpair failed and we were unable to recover it. 00:22:25.135 [2024-05-15 01:09:37.365947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.366127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.366152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.135 qpair failed and we were unable to recover it. 00:22:25.135 [2024-05-15 01:09:37.366343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.366504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.366528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.135 qpair failed and we were unable to recover it. 
00:22:25.135 [2024-05-15 01:09:37.366743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.366937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.366962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.135 qpair failed and we were unable to recover it. 00:22:25.135 [2024-05-15 01:09:37.367121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.367282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.367306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.135 qpair failed and we were unable to recover it. 00:22:25.135 [2024-05-15 01:09:37.367496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.367649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.367674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.135 qpair failed and we were unable to recover it. 00:22:25.135 [2024-05-15 01:09:37.367834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.367992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.368017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.135 qpair failed and we were unable to recover it. 00:22:25.135 [2024-05-15 01:09:37.368188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.368368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.368393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.135 qpair failed and we were unable to recover it. 00:22:25.135 [2024-05-15 01:09:37.368577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.368735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.368761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.135 qpair failed and we were unable to recover it. 00:22:25.135 [2024-05-15 01:09:37.368949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.369101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.369126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.135 qpair failed and we were unable to recover it. 
00:22:25.135 [2024-05-15 01:09:37.369322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.369479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.369503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.135 qpair failed and we were unable to recover it. 00:22:25.135 [2024-05-15 01:09:37.369657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.369836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.369860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.135 qpair failed and we were unable to recover it. 00:22:25.135 [2024-05-15 01:09:37.370021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.370210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.370236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.135 qpair failed and we were unable to recover it. 00:22:25.135 [2024-05-15 01:09:37.370417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.370571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.370596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.135 qpair failed and we were unable to recover it. 00:22:25.135 [2024-05-15 01:09:37.370783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.370951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.370976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.135 qpair failed and we were unable to recover it. 00:22:25.135 [2024-05-15 01:09:37.371153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.371327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.135 [2024-05-15 01:09:37.371352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.136 qpair failed and we were unable to recover it. 00:22:25.136 [2024-05-15 01:09:37.371541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.371730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.371754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.136 qpair failed and we were unable to recover it. 
00:22:25.136 [2024-05-15 01:09:37.371945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.372112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.372137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.136 qpair failed and we were unable to recover it. 00:22:25.136 [2024-05-15 01:09:37.372292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.372456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.372482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.136 qpair failed and we were unable to recover it. 00:22:25.136 [2024-05-15 01:09:37.372664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.372811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.372835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.136 qpair failed and we were unable to recover it. 00:22:25.136 [2024-05-15 01:09:37.373016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.373176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.373202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.136 qpair failed and we were unable to recover it. 00:22:25.136 [2024-05-15 01:09:37.373360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.373512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.373536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.136 qpair failed and we were unable to recover it. 00:22:25.136 [2024-05-15 01:09:37.373744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.373934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.373959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.136 qpair failed and we were unable to recover it. 00:22:25.136 [2024-05-15 01:09:37.374140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.374295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.374319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.136 qpair failed and we were unable to recover it. 
00:22:25.136 [2024-05-15 01:09:37.374507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.374669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.374694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.136 qpair failed and we were unable to recover it. 00:22:25.136 [2024-05-15 01:09:37.374852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.375017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.375042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.136 qpair failed and we were unable to recover it. 00:22:25.136 [2024-05-15 01:09:37.375197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.375347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.375371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.136 qpair failed and we were unable to recover it. 00:22:25.136 [2024-05-15 01:09:37.375573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.375736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.375761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.136 qpair failed and we were unable to recover it. 00:22:25.136 [2024-05-15 01:09:37.375956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.376124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.376149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.136 qpair failed and we were unable to recover it. 00:22:25.136 [2024-05-15 01:09:37.376333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.376486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.376510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.136 qpair failed and we were unable to recover it. 00:22:25.136 [2024-05-15 01:09:37.376692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.376849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.376873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.136 qpair failed and we were unable to recover it. 
00:22:25.136 [2024-05-15 01:09:37.377047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.377210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.377234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.136 qpair failed and we were unable to recover it. 00:22:25.136 [2024-05-15 01:09:37.377395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.377555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.377580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.136 qpair failed and we were unable to recover it. 00:22:25.136 [2024-05-15 01:09:37.377764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.377912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.377947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.136 qpair failed and we were unable to recover it. 00:22:25.136 [2024-05-15 01:09:37.378108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.378265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.378290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.136 qpair failed and we were unable to recover it. 00:22:25.136 [2024-05-15 01:09:37.378445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.378656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.378681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.136 qpair failed and we were unable to recover it. 00:22:25.136 [2024-05-15 01:09:37.378830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.379040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.379065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.136 qpair failed and we were unable to recover it. 00:22:25.136 [2024-05-15 01:09:37.379242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.379420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.379449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.136 qpair failed and we were unable to recover it. 
00:22:25.136 [2024-05-15 01:09:37.379607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.379771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.379796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.136 qpair failed and we were unable to recover it. 00:22:25.136 [2024-05-15 01:09:37.379953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.380114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.380138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.136 qpair failed and we were unable to recover it. 00:22:25.136 [2024-05-15 01:09:37.380326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.380511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.380535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.136 qpair failed and we were unable to recover it. 00:22:25.136 [2024-05-15 01:09:37.380723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.380885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.380911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.136 qpair failed and we were unable to recover it. 00:22:25.136 [2024-05-15 01:09:37.381079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.381241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.381265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.136 qpair failed and we were unable to recover it. 00:22:25.136 [2024-05-15 01:09:37.381447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.381603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.136 [2024-05-15 01:09:37.381627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.136 qpair failed and we were unable to recover it. 00:22:25.137 [2024-05-15 01:09:37.381816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.137 [2024-05-15 01:09:37.381999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.137 [2024-05-15 01:09:37.382024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.137 qpair failed and we were unable to recover it. 
00:22:25.137 [2024-05-15 01:09:37.382185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.137 [2024-05-15 01:09:37.382338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.137 [2024-05-15 01:09:37.382362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.137 qpair failed and we were unable to recover it. 00:22:25.137 [2024-05-15 01:09:37.382551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.137 [2024-05-15 01:09:37.382733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.137 [2024-05-15 01:09:37.382758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.137 qpair failed and we were unable to recover it. 00:22:25.137 [2024-05-15 01:09:37.382963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.137 [2024-05-15 01:09:37.383126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.137 [2024-05-15 01:09:37.383155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.137 qpair failed and we were unable to recover it. 00:22:25.137 [2024-05-15 01:09:37.383350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.137 [2024-05-15 01:09:37.383516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.137 [2024-05-15 01:09:37.383540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.137 qpair failed and we were unable to recover it. 00:22:25.137 [2024-05-15 01:09:37.383691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.137 [2024-05-15 01:09:37.383866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.137 [2024-05-15 01:09:37.383891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.137 qpair failed and we were unable to recover it. 00:22:25.137 [2024-05-15 01:09:37.384064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.137 [2024-05-15 01:09:37.384229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.137 [2024-05-15 01:09:37.384256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.137 qpair failed and we were unable to recover it. 00:22:25.137 [2024-05-15 01:09:37.384423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.137 [2024-05-15 01:09:37.384611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.137 [2024-05-15 01:09:37.384636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.137 qpair failed and we were unable to recover it. 
00:22:25.137 [2024-05-15 01:09:37.384798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.137 [2024-05-15 01:09:37.384992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.137 [2024-05-15 01:09:37.385017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420
00:22:25.137 qpair failed and we were unable to recover it.
00:22:25.137 [2024-05-15 01:09:37.385175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.137 [2024-05-15 01:09:37.385347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.137 [2024-05-15 01:09:37.385373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420
00:22:25.137 qpair failed and we were unable to recover it.
00:22:25.137 [2024-05-15 01:09:37.385545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.137 [2024-05-15 01:09:37.385723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.137 [2024-05-15 01:09:37.385749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420
00:22:25.137 qpair failed and we were unable to recover it.
00:22:25.137 [2024-05-15 01:09:37.385935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.137 [2024-05-15 01:09:37.386112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.137 [2024-05-15 01:09:37.386137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420
00:22:25.137 qpair failed and we were unable to recover it.
00:22:25.137 [2024-05-15 01:09:37.386311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.137 [2024-05-15 01:09:37.386485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.137 [2024-05-15 01:09:37.386509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420
00:22:25.137 qpair failed and we were unable to recover it.
00:22:25.137 01:09:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:22:25.137 [2024-05-15 01:09:37.386694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.137 01:09:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0
00:22:25.137 [2024-05-15 01:09:37.386854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.137 [2024-05-15 01:09:37.386880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420
00:22:25.137 qpair failed and we were unable to recover it.
00:22:25.137 01:09:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:22:25.137 [2024-05-15 01:09:37.387086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.137 01:09:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:25.137 [2024-05-15 01:09:37.387246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.137 01:09:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:25.137 [2024-05-15 01:09:37.387272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420
00:22:25.137 qpair failed and we were unable to recover it.
00:22:25.137 [2024-05-15 01:09:37.387443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.137 [2024-05-15 01:09:37.387647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.137 [2024-05-15 01:09:37.387672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420
00:22:25.137 qpair failed and we were unable to recover it.
00:22:25.137 [2024-05-15 01:09:37.387897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.137 [2024-05-15 01:09:37.388107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.137 [2024-05-15 01:09:37.388133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420
00:22:25.137 qpair failed and we were unable to recover it.
00:22:25.137 [2024-05-15 01:09:37.388292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.137 [2024-05-15 01:09:37.388452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.137 [2024-05-15 01:09:37.388477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420
00:22:25.137 qpair failed and we were unable to recover it.
00:22:25.137 [2024-05-15 01:09:37.388630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.137 [2024-05-15 01:09:37.388832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.137 [2024-05-15 01:09:37.388857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420
00:22:25.137 qpair failed and we were unable to recover it.
00:22:25.137 [2024-05-15 01:09:37.389040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.137 [2024-05-15 01:09:37.389204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.137 [2024-05-15 01:09:37.389237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420
00:22:25.137 qpair failed and we were unable to recover it.
00:22:25.137 [2024-05-15 01:09:37.389416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.137 [2024-05-15 01:09:37.389608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.137 [2024-05-15 01:09:37.389633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.137 qpair failed and we were unable to recover it. 00:22:25.137 [2024-05-15 01:09:37.389789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.137 [2024-05-15 01:09:37.389984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.137 [2024-05-15 01:09:37.390009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.137 qpair failed and we were unable to recover it. 00:22:25.137 [2024-05-15 01:09:37.390171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.137 [2024-05-15 01:09:37.390384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.137 [2024-05-15 01:09:37.390414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.137 qpair failed and we were unable to recover it. 00:22:25.137 [2024-05-15 01:09:37.390614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.137 [2024-05-15 01:09:37.390781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.137 [2024-05-15 01:09:37.390805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.137 qpair failed and we were unable to recover it. 00:22:25.137 [2024-05-15 01:09:37.390979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.137 [2024-05-15 01:09:37.391151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.137 [2024-05-15 01:09:37.391177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.137 qpair failed and we were unable to recover it. 00:22:25.137 [2024-05-15 01:09:37.391375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.137 [2024-05-15 01:09:37.391536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.137 [2024-05-15 01:09:37.391562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.137 qpair failed and we were unable to recover it. 00:22:25.137 [2024-05-15 01:09:37.391710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.137 [2024-05-15 01:09:37.391935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.137 [2024-05-15 01:09:37.391960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.137 qpair failed and we were unable to recover it. 
00:22:25.137 [2024-05-15 01:09:37.392152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.137 [2024-05-15 01:09:37.392358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.137 [2024-05-15 01:09:37.392385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.137 qpair failed and we were unable to recover it. 00:22:25.137 [2024-05-15 01:09:37.392568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.137 [2024-05-15 01:09:37.392726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.137 [2024-05-15 01:09:37.392751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.138 qpair failed and we were unable to recover it. 00:22:25.138 [2024-05-15 01:09:37.392954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.393119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.393146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.138 qpair failed and we were unable to recover it. 00:22:25.138 [2024-05-15 01:09:37.393332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.393524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.393549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.138 qpair failed and we were unable to recover it. 00:22:25.138 [2024-05-15 01:09:37.393712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.393888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.393912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.138 qpair failed and we were unable to recover it. 00:22:25.138 [2024-05-15 01:09:37.394084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.394240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.394269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.138 qpair failed and we were unable to recover it. 00:22:25.138 [2024-05-15 01:09:37.394440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.394595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.394620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.138 qpair failed and we were unable to recover it. 
00:22:25.138 [2024-05-15 01:09:37.394782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.394947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.394972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.138 qpair failed and we were unable to recover it. 00:22:25.138 [2024-05-15 01:09:37.395127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.395284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.395315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.138 qpair failed and we were unable to recover it. 00:22:25.138 [2024-05-15 01:09:37.395517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.395676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.395702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.138 qpair failed and we were unable to recover it. 00:22:25.138 [2024-05-15 01:09:37.395856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.396045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.396071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.138 qpair failed and we were unable to recover it. 00:22:25.138 [2024-05-15 01:09:37.396251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.396436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.396462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.138 qpair failed and we were unable to recover it. 00:22:25.138 [2024-05-15 01:09:37.396628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.396790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.396817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.138 qpair failed and we were unable to recover it. 00:22:25.138 [2024-05-15 01:09:37.397006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.397196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.397221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.138 qpair failed and we were unable to recover it. 
00:22:25.138 [2024-05-15 01:09:37.397376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.397587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.397612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.138 qpair failed and we were unable to recover it. 00:22:25.138 [2024-05-15 01:09:37.397760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.397935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.397960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.138 qpair failed and we were unable to recover it. 00:22:25.138 [2024-05-15 01:09:37.398155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.398340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.398364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.138 qpair failed and we were unable to recover it. 00:22:25.138 [2024-05-15 01:09:37.398578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.398727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.398752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.138 qpair failed and we were unable to recover it. 00:22:25.138 [2024-05-15 01:09:37.398941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.399150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.399175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.138 qpair failed and we were unable to recover it. 00:22:25.138 [2024-05-15 01:09:37.399369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.399527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.399551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.138 qpair failed and we were unable to recover it. 00:22:25.138 [2024-05-15 01:09:37.399706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.399885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.399910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.138 qpair failed and we were unable to recover it. 
00:22:25.138 [2024-05-15 01:09:37.400082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.400243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.400271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.138 qpair failed and we were unable to recover it. 00:22:25.138 [2024-05-15 01:09:37.400448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.400626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.400651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.138 qpair failed and we were unable to recover it. 00:22:25.138 [2024-05-15 01:09:37.400817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.400991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.401018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.138 qpair failed and we were unable to recover it. 00:22:25.138 [2024-05-15 01:09:37.401171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.401374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.401400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.138 qpair failed and we were unable to recover it. 00:22:25.138 [2024-05-15 01:09:37.401563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.401746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.401771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.138 qpair failed and we were unable to recover it. 00:22:25.138 [2024-05-15 01:09:37.401961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.402119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.402143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.138 qpair failed and we were unable to recover it. 00:22:25.138 [2024-05-15 01:09:37.402297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.402460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.138 [2024-05-15 01:09:37.402485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.138 qpair failed and we were unable to recover it. 
00:22:25.138 [2024-05-15 01:09:37.402645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.138 [2024-05-15 01:09:37.402800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.138 [2024-05-15 01:09:37.402825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420
00:22:25.138 qpair failed and we were unable to recover it.
00:22:25.138 [2024-05-15 01:09:37.403004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.138 [2024-05-15 01:09:37.403167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.138 [2024-05-15 01:09:37.403192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420
00:22:25.138 qpair failed and we were unable to recover it.
00:22:25.138 [2024-05-15 01:09:37.403362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.138 [2024-05-15 01:09:37.403544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.138 [2024-05-15 01:09:37.403569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420
00:22:25.138 qpair failed and we were unable to recover it.
00:22:25.138 01:09:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:25.138 [2024-05-15 01:09:37.403726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.138 01:09:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:22:25.139 [2024-05-15 01:09:37.403911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.139 [2024-05-15 01:09:37.403953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420
00:22:25.139 01:09:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:25.139 qpair failed and we were unable to recover it.
00:22:25.139 01:09:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:25.139 [2024-05-15 01:09:37.404131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.139 [2024-05-15 01:09:37.404298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.139 [2024-05-15 01:09:37.404323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420
00:22:25.139 qpair failed and we were unable to recover it.
00:22:25.139 [2024-05-15 01:09:37.404506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.139 [2024-05-15 01:09:37.404661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.139 [2024-05-15 01:09:37.404688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420
00:22:25.139 qpair failed and we were unable to recover it.
00:22:25.139 [2024-05-15 01:09:37.404854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.405054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.405079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.139 qpair failed and we were unable to recover it. 00:22:25.139 [2024-05-15 01:09:37.405252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.405403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.405428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.139 qpair failed and we were unable to recover it. 00:22:25.139 [2024-05-15 01:09:37.405587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.405740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.405766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.139 qpair failed and we were unable to recover it. 00:22:25.139 [2024-05-15 01:09:37.405934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.406082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.406108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.139 qpair failed and we were unable to recover it. 00:22:25.139 [2024-05-15 01:09:37.406284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.406441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.406467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.139 qpair failed and we were unable to recover it. 00:22:25.139 [2024-05-15 01:09:37.406633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.406798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.406824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.139 qpair failed and we were unable to recover it. 00:22:25.139 [2024-05-15 01:09:37.407012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.407180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.407204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.139 qpair failed and we were unable to recover it. 
00:22:25.139 [2024-05-15 01:09:37.407360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.407543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.407569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.139 qpair failed and we were unable to recover it. 00:22:25.139 [2024-05-15 01:09:37.407718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.407877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.407901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.139 qpair failed and we were unable to recover it. 00:22:25.139 [2024-05-15 01:09:37.408071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.408245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.408269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.139 qpair failed and we were unable to recover it. 00:22:25.139 [2024-05-15 01:09:37.408454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.408614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.408639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.139 qpair failed and we were unable to recover it. 00:22:25.139 [2024-05-15 01:09:37.408803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.408959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.408985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.139 qpair failed and we were unable to recover it. 00:22:25.139 [2024-05-15 01:09:37.409158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.409320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.409346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.139 qpair failed and we were unable to recover it. 00:22:25.139 [2024-05-15 01:09:37.409513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.409685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.409710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.139 qpair failed and we were unable to recover it. 
00:22:25.139 [2024-05-15 01:09:37.409879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.410059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.410084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.139 qpair failed and we were unable to recover it. 00:22:25.139 [2024-05-15 01:09:37.410249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.410406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.410431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.139 qpair failed and we were unable to recover it. 00:22:25.139 [2024-05-15 01:09:37.410601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.410763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.410787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.139 qpair failed and we were unable to recover it. 00:22:25.139 [2024-05-15 01:09:37.410962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.411122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.411149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.139 qpair failed and we were unable to recover it. 00:22:25.139 [2024-05-15 01:09:37.411346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.411529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.411553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.139 qpair failed and we were unable to recover it. 00:22:25.139 [2024-05-15 01:09:37.411740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.411921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.411952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.139 qpair failed and we were unable to recover it. 00:22:25.139 [2024-05-15 01:09:37.412120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.412291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.412316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.139 qpair failed and we were unable to recover it. 
00:22:25.139 [2024-05-15 01:09:37.412473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.412631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.412655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.139 qpair failed and we were unable to recover it. 00:22:25.139 [2024-05-15 01:09:37.412838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.413001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.413027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.139 qpair failed and we were unable to recover it. 00:22:25.139 [2024-05-15 01:09:37.413189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.413384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.413408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.139 qpair failed and we were unable to recover it. 00:22:25.139 [2024-05-15 01:09:37.413564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.413745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.413771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.139 qpair failed and we were unable to recover it. 00:22:25.139 [2024-05-15 01:09:37.413941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.139 [2024-05-15 01:09:37.414102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.414126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.140 qpair failed and we were unable to recover it. 00:22:25.140 [2024-05-15 01:09:37.414284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.414445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.414471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.140 qpair failed and we were unable to recover it. 00:22:25.140 [2024-05-15 01:09:37.414634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.414794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.414819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.140 qpair failed and we were unable to recover it. 
00:22:25.140 [2024-05-15 01:09:37.414978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.415148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.415173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.140 qpair failed and we were unable to recover it. 00:22:25.140 [2024-05-15 01:09:37.415350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.415533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.415558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.140 qpair failed and we were unable to recover it. 00:22:25.140 [2024-05-15 01:09:37.415766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.416091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.416117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.140 qpair failed and we were unable to recover it. 00:22:25.140 [2024-05-15 01:09:37.416279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.416461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.416490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.140 qpair failed and we were unable to recover it. 00:22:25.140 [2024-05-15 01:09:37.416659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.416818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.416845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.140 qpair failed and we were unable to recover it. 00:22:25.140 [2024-05-15 01:09:37.417016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.417207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.417232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.140 qpair failed and we were unable to recover it. 00:22:25.140 [2024-05-15 01:09:37.417390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.417581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.417606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.140 qpair failed and we were unable to recover it. 
00:22:25.140 [2024-05-15 01:09:37.417897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.418120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.418146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.140 qpair failed and we were unable to recover it. 00:22:25.140 [2024-05-15 01:09:37.418344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.418512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.418539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.140 qpair failed and we were unable to recover it. 00:22:25.140 [2024-05-15 01:09:37.418714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.418874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.418899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.140 qpair failed and we were unable to recover it. 00:22:25.140 [2024-05-15 01:09:37.419077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.419242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.419266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.140 qpair failed and we were unable to recover it. 00:22:25.140 [2024-05-15 01:09:37.419425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.419605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.419630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.140 qpair failed and we were unable to recover it. 00:22:25.140 [2024-05-15 01:09:37.419803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.420001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.420027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.140 qpair failed and we were unable to recover it. 00:22:25.140 [2024-05-15 01:09:37.420327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.420516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.420551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.140 qpair failed and we were unable to recover it. 
00:22:25.140 [2024-05-15 01:09:37.420716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.420941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.420967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.140 qpair failed and we were unable to recover it. 00:22:25.140 [2024-05-15 01:09:37.421159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.421352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.421377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.140 qpair failed and we were unable to recover it. 00:22:25.140 [2024-05-15 01:09:37.421546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.421705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.421730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.140 qpair failed and we were unable to recover it. 00:22:25.140 [2024-05-15 01:09:37.421936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.422130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.422155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.140 qpair failed and we were unable to recover it. 00:22:25.140 [2024-05-15 01:09:37.422452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.422645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.422670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.140 qpair failed and we were unable to recover it. 00:22:25.140 [2024-05-15 01:09:37.422840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.423023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.423048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.140 qpair failed and we were unable to recover it. 00:22:25.140 [2024-05-15 01:09:37.423236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.140 [2024-05-15 01:09:37.423404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.423429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.141 qpair failed and we were unable to recover it. 
00:22:25.141 [2024-05-15 01:09:37.423593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.423754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.423778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.141 qpair failed and we were unable to recover it. 00:22:25.141 [2024-05-15 01:09:37.423941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.424102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.424128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.141 qpair failed and we were unable to recover it. 00:22:25.141 [2024-05-15 01:09:37.424343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.424505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.424529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.141 qpair failed and we were unable to recover it. 00:22:25.141 [2024-05-15 01:09:37.424713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.424872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.424897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.141 qpair failed and we were unable to recover it. 00:22:25.141 [2024-05-15 01:09:37.425108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.425266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.425291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.141 qpair failed and we were unable to recover it. 00:22:25.141 [2024-05-15 01:09:37.425459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.425643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.425667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.141 qpair failed and we were unable to recover it. 00:22:25.141 [2024-05-15 01:09:37.425832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.426001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.426027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.141 qpair failed and we were unable to recover it. 
00:22:25.141 [2024-05-15 01:09:37.426209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.426397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.426422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.141 qpair failed and we were unable to recover it. 00:22:25.141 [2024-05-15 01:09:37.426609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.426783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.426808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.141 qpair failed and we were unable to recover it. 00:22:25.141 Malloc0 00:22:25.141 [2024-05-15 01:09:37.427198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.427385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 01:09:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.141 [2024-05-15 01:09:37.427415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.141 qpair failed and we were unable to recover it. 00:22:25.141 01:09:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:25.141 [2024-05-15 01:09:37.427572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 01:09:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.141 01:09:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:25.141 [2024-05-15 01:09:37.427735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.427761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.141 qpair failed and we were unable to recover it. 00:22:25.141 [2024-05-15 01:09:37.427962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.428127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.428152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.141 qpair failed and we were unable to recover it. 00:22:25.141 [2024-05-15 01:09:37.428337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.428548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.428573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.141 qpair failed and we were unable to recover it. 
00:22:25.141 [2024-05-15 01:09:37.428753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.428909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.428939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.141 qpair failed and we were unable to recover it. 00:22:25.141 [2024-05-15 01:09:37.429129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.429328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.429355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.141 qpair failed and we were unable to recover it. 00:22:25.141 [2024-05-15 01:09:37.429520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.429730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.429756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.141 qpair failed and we were unable to recover it. 00:22:25.141 [2024-05-15 01:09:37.429911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.430106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.430132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.141 qpair failed and we were unable to recover it. 00:22:25.141 [2024-05-15 01:09:37.430316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.430503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.430527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.141 qpair failed and we were unable to recover it. 00:22:25.141 [2024-05-15 01:09:37.430699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.430852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.430876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.141 qpair failed and we were unable to recover it. 00:22:25.141 [2024-05-15 01:09:37.430970] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.141 [2024-05-15 01:09:37.431070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.431252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.431277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.141 qpair failed and we were unable to recover it. 
00:22:25.141 [2024-05-15 01:09:37.431450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.431609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.431636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.141 qpair failed and we were unable to recover it. 00:22:25.141 [2024-05-15 01:09:37.431793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.431964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.431989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.141 qpair failed and we were unable to recover it. 00:22:25.141 [2024-05-15 01:09:37.432167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.432400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.432424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.141 qpair failed and we were unable to recover it. 00:22:25.141 [2024-05-15 01:09:37.432598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.432780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.432805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.141 qpair failed and we were unable to recover it. 00:22:25.141 [2024-05-15 01:09:37.432994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.433159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.433186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.141 qpair failed and we were unable to recover it. 00:22:25.141 [2024-05-15 01:09:37.433382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.433544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.433569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.141 qpair failed and we were unable to recover it. 00:22:25.141 [2024-05-15 01:09:37.433784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.433956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.141 [2024-05-15 01:09:37.433982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.141 qpair failed and we were unable to recover it. 
00:22:25.142 [2024-05-15 01:09:37.434141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.434307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.434331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.142 qpair failed and we were unable to recover it. 00:22:25.142 [2024-05-15 01:09:37.434492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.434658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.434685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.142 qpair failed and we were unable to recover it. 00:22:25.142 [2024-05-15 01:09:37.434865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.435079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.435104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.142 qpair failed and we were unable to recover it. 00:22:25.142 [2024-05-15 01:09:37.435267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.435453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.435478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.142 qpair failed and we were unable to recover it. 00:22:25.142 [2024-05-15 01:09:37.435642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.435805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.435829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.142 qpair failed and we were unable to recover it. 00:22:25.142 [2024-05-15 01:09:37.435993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.436147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.436172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.142 qpair failed and we were unable to recover it. 00:22:25.142 [2024-05-15 01:09:37.436348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.436515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.436540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.142 qpair failed and we were unable to recover it. 
00:22:25.142 [2024-05-15 01:09:37.436702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.436882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.436907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.142 qpair failed and we were unable to recover it. 00:22:25.142 [2024-05-15 01:09:37.437083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.437237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.437261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.142 qpair failed and we were unable to recover it. 00:22:25.142 [2024-05-15 01:09:37.437417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.437583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.437607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.142 qpair failed and we were unable to recover it. 00:22:25.142 [2024-05-15 01:09:37.437779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.437966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.437990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.142 qpair failed and we were unable to recover it. 00:22:25.142 [2024-05-15 01:09:37.438154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.438346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.438371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.142 qpair failed and we were unable to recover it. 00:22:25.142 [2024-05-15 01:09:37.438547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.438730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.438754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.142 qpair failed and we were unable to recover it. 00:22:25.142 [2024-05-15 01:09:37.438966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.439136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.439162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.142 qpair failed and we were unable to recover it. 
00:22:25.142 01:09:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.142 01:09:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:25.142 [2024-05-15 01:09:37.439354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 01:09:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.142 [2024-05-15 01:09:37.439520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 01:09:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:25.142 [2024-05-15 01:09:37.439545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.142 qpair failed and we were unable to recover it. 00:22:25.142 [2024-05-15 01:09:37.439708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.439861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.439889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.142 qpair failed and we were unable to recover it. 00:22:25.142 [2024-05-15 01:09:37.440064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.440221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.440248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.142 qpair failed and we were unable to recover it. 00:22:25.142 [2024-05-15 01:09:37.440413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.440608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.440633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.142 qpair failed and we were unable to recover it. 00:22:25.142 [2024-05-15 01:09:37.440820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.441002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.441029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.142 qpair failed and we were unable to recover it. 00:22:25.142 [2024-05-15 01:09:37.441198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.441358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.441383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.142 qpair failed and we were unable to recover it. 
00:22:25.142 [2024-05-15 01:09:37.441547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.441738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.441766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.142 qpair failed and we were unable to recover it. 00:22:25.142 [2024-05-15 01:09:37.441943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.442138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.442162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.142 qpair failed and we were unable to recover it. 00:22:25.142 [2024-05-15 01:09:37.442334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.442527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.442551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.142 qpair failed and we were unable to recover it. 00:22:25.142 [2024-05-15 01:09:37.442708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.442867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.442892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.142 qpair failed and we were unable to recover it. 00:22:25.142 [2024-05-15 01:09:37.443063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.443231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.443256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.142 qpair failed and we were unable to recover it. 00:22:25.142 [2024-05-15 01:09:37.443420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.443582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.443611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.142 qpair failed and we were unable to recover it. 00:22:25.142 [2024-05-15 01:09:37.443779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.443939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.142 [2024-05-15 01:09:37.443967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.142 qpair failed and we were unable to recover it. 
00:22:25.143 [2024-05-15 01:09:37.444136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.444310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.444335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.143 qpair failed and we were unable to recover it. 00:22:25.143 [2024-05-15 01:09:37.444492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.444678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.444702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.143 qpair failed and we were unable to recover it. 00:22:25.143 [2024-05-15 01:09:37.444880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.445039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.445064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.143 qpair failed and we were unable to recover it. 00:22:25.143 [2024-05-15 01:09:37.445233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.445422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.445447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.143 qpair failed and we were unable to recover it. 00:22:25.143 [2024-05-15 01:09:37.445623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.445776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.445800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.143 qpair failed and we were unable to recover it. 00:22:25.143 [2024-05-15 01:09:37.445979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.446144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.446169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.143 qpair failed and we were unable to recover it. 00:22:25.143 [2024-05-15 01:09:37.446367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.446557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.446581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.143 qpair failed and we were unable to recover it. 
00:22:25.143 [2024-05-15 01:09:37.446740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.446979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.447008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.143 qpair failed and we were unable to recover it. 00:22:25.143 [2024-05-15 01:09:37.447186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 01:09:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.143 01:09:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:25.143 [2024-05-15 01:09:37.447356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.447383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.143 qpair failed and we were unable to recover it. 00:22:25.143 01:09:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.143 [2024-05-15 01:09:37.447542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 01:09:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:25.143 [2024-05-15 01:09:37.447736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.447761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.143 qpair failed and we were unable to recover it. 00:22:25.143 [2024-05-15 01:09:37.447953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.448136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.448161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.143 qpair failed and we were unable to recover it. 00:22:25.143 [2024-05-15 01:09:37.448319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.448483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.448508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.143 qpair failed and we were unable to recover it. 00:22:25.143 [2024-05-15 01:09:37.448665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.448834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.448858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.143 qpair failed and we were unable to recover it. 
00:22:25.143 [2024-05-15 01:09:37.449040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.449203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.449228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.143 qpair failed and we were unable to recover it. 00:22:25.143 [2024-05-15 01:09:37.449389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.449547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.449572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.143 qpair failed and we were unable to recover it. 00:22:25.143 [2024-05-15 01:09:37.449735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.449926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.449957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.143 qpair failed and we were unable to recover it. 00:22:25.143 [2024-05-15 01:09:37.450114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.450270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.450295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.143 qpair failed and we were unable to recover it. 00:22:25.143 [2024-05-15 01:09:37.450469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.450648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.450673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.143 qpair failed and we were unable to recover it. 00:22:25.143 [2024-05-15 01:09:37.450850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.451018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.451043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.143 qpair failed and we were unable to recover it. 00:22:25.143 [2024-05-15 01:09:37.451197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.451389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.451414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.143 qpair failed and we were unable to recover it. 
00:22:25.143 [2024-05-15 01:09:37.451580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.451765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.451792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.143 qpair failed and we were unable to recover it. 00:22:25.143 [2024-05-15 01:09:37.451983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.452153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.452178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.143 qpair failed and we were unable to recover it. 00:22:25.143 [2024-05-15 01:09:37.452351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.452510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.452534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.143 qpair failed and we were unable to recover it. 00:22:25.143 [2024-05-15 01:09:37.452698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.452869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.452893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.143 qpair failed and we were unable to recover it. 00:22:25.143 [2024-05-15 01:09:37.453067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.453261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.453286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.143 qpair failed and we were unable to recover it. 00:22:25.143 [2024-05-15 01:09:37.453470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.453636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.453660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.143 qpair failed and we were unable to recover it. 00:22:25.143 [2024-05-15 01:09:37.453843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.454022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.454048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.143 qpair failed and we were unable to recover it. 
00:22:25.143 [2024-05-15 01:09:37.454212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.454368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.143 [2024-05-15 01:09:37.454393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.143 qpair failed and we were unable to recover it. 00:22:25.143 [2024-05-15 01:09:37.454561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.144 [2024-05-15 01:09:37.454719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.144 [2024-05-15 01:09:37.454743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.144 qpair failed and we were unable to recover it. 00:22:25.144 [2024-05-15 01:09:37.455012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.144 [2024-05-15 01:09:37.455237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.144 01:09:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.144 [2024-05-15 01:09:37.455266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.144 qpair failed and we were unable to recover it. 00:22:25.144 01:09:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:25.144 [2024-05-15 01:09:37.455459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.144 01:09:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.144 01:09:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:25.144 [2024-05-15 01:09:37.455653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.144 [2024-05-15 01:09:37.455689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.144 qpair failed and we were unable to recover it. 00:22:25.144 [2024-05-15 01:09:37.455878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.144 [2024-05-15 01:09:37.456085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.144 [2024-05-15 01:09:37.456113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.144 qpair failed and we were unable to recover it. 00:22:25.144 [2024-05-15 01:09:37.456292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.144 [2024-05-15 01:09:37.456452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.144 [2024-05-15 01:09:37.456477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.144 qpair failed and we were unable to recover it. 
00:22:25.144 [2024-05-15 01:09:37.456640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.144 [2024-05-15 01:09:37.456803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.144 [2024-05-15 01:09:37.456827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.144 qpair failed and we were unable to recover it. 00:22:25.144 [2024-05-15 01:09:37.456996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.144 [2024-05-15 01:09:37.457165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.144 [2024-05-15 01:09:37.457190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.144 qpair failed and we were unable to recover it. 00:22:25.144 [2024-05-15 01:09:37.457382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.144 [2024-05-15 01:09:37.457537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.144 [2024-05-15 01:09:37.457561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.144 qpair failed and we were unable to recover it. 00:22:25.144 [2024-05-15 01:09:37.457746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.144 [2024-05-15 01:09:37.457906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.144 [2024-05-15 01:09:37.457937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.144 qpair failed and we were unable to recover it. 00:22:25.144 [2024-05-15 01:09:37.458096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.144 [2024-05-15 01:09:37.458253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.144 [2024-05-15 01:09:37.458278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.144 qpair failed and we were unable to recover it. 00:22:25.144 [2024-05-15 01:09:37.458439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.144 [2024-05-15 01:09:37.458591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.144 [2024-05-15 01:09:37.458615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.144 qpair failed and we were unable to recover it. 00:22:25.144 [2024-05-15 01:09:37.458801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.144 [2024-05-15 01:09:37.458960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.144 [2024-05-15 01:09:37.458967] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:25.144 [2024-05-15 01:09:37.458985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e75420 with addr=10.0.0.2, port=4420 00:22:25.144 qpair failed and we were unable to recover it. 
00:22:25.144 [2024-05-15 01:09:37.459156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.144 [2024-05-15 01:09:37.459354] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:25.144 [2024-05-15 01:09:37.462129] posix.c: 675:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set
00:22:25.144 [2024-05-15 01:09:37.462187] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e75420 (107): Transport endpoint is not connected
00:22:25.144 [2024-05-15 01:09:37.462264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:25.144 qpair failed and we were unable to recover it.
00:22:25.144 01:09:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:25.144 01:09:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:22:25.144 01:09:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:25.144 01:09:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:25.144 01:09:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:25.144 01:09:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@58 -- # wait 1347816
00:22:25.144 [2024-05-15 01:09:37.471668] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:25.144 [2024-05-15 01:09:37.471858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:25.144 [2024-05-15 01:09:37.471889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:25.144 [2024-05-15 01:09:37.471923] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:25.144 [2024-05-15 01:09:37.471948] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420
00:22:25.144 [2024-05-15 01:09:37.471979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:25.144 qpair failed and we were unable to recover it.
00:22:25.144 [2024-05-15 01:09:37.481656] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.144 [2024-05-15 01:09:37.481821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.144 [2024-05-15 01:09:37.481848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.144 [2024-05-15 01:09:37.481863] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.144 [2024-05-15 01:09:37.481875] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.144 [2024-05-15 01:09:37.481904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.144 qpair failed and we were unable to recover it. 00:22:25.144 [2024-05-15 01:09:37.491577] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.144 [2024-05-15 01:09:37.491750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.144 [2024-05-15 01:09:37.491776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.144 [2024-05-15 01:09:37.491791] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.144 [2024-05-15 01:09:37.491804] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.144 [2024-05-15 01:09:37.491832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.144 qpair failed and we were unable to recover it. 00:22:25.405 [2024-05-15 01:09:37.501577] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.405 [2024-05-15 01:09:37.501749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.405 [2024-05-15 01:09:37.501776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.405 [2024-05-15 01:09:37.501791] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.405 [2024-05-15 01:09:37.501803] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.405 [2024-05-15 01:09:37.501831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.405 qpair failed and we were unable to recover it. 
00:22:25.405 [2024-05-15 01:09:37.511613] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.405 [2024-05-15 01:09:37.511775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.405 [2024-05-15 01:09:37.511800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.405 [2024-05-15 01:09:37.511815] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.405 [2024-05-15 01:09:37.511827] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.405 [2024-05-15 01:09:37.511855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.405 qpair failed and we were unable to recover it. 00:22:25.405 [2024-05-15 01:09:37.521636] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.405 [2024-05-15 01:09:37.521795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.405 [2024-05-15 01:09:37.521820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.405 [2024-05-15 01:09:37.521834] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.405 [2024-05-15 01:09:37.521846] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.405 [2024-05-15 01:09:37.521874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.405 qpair failed and we were unable to recover it. 00:22:25.405 [2024-05-15 01:09:37.531644] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.405 [2024-05-15 01:09:37.531810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.405 [2024-05-15 01:09:37.531836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.405 [2024-05-15 01:09:37.531850] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.405 [2024-05-15 01:09:37.531862] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.405 [2024-05-15 01:09:37.531890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.405 qpair failed and we were unable to recover it. 
00:22:25.405 [2024-05-15 01:09:37.541687] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.405 [2024-05-15 01:09:37.541855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.405 [2024-05-15 01:09:37.541880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.405 [2024-05-15 01:09:37.541895] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.405 [2024-05-15 01:09:37.541907] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.405 [2024-05-15 01:09:37.541942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.405 qpair failed and we were unable to recover it. 00:22:25.405 [2024-05-15 01:09:37.551738] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.405 [2024-05-15 01:09:37.551946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.405 [2024-05-15 01:09:37.551971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.405 [2024-05-15 01:09:37.551986] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.405 [2024-05-15 01:09:37.551998] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.405 [2024-05-15 01:09:37.552027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.405 qpair failed and we were unable to recover it. 00:22:25.405 [2024-05-15 01:09:37.561808] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.405 [2024-05-15 01:09:37.561974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.405 [2024-05-15 01:09:37.562007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.405 [2024-05-15 01:09:37.562023] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.405 [2024-05-15 01:09:37.562035] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.405 [2024-05-15 01:09:37.562063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.405 qpair failed and we were unable to recover it. 
00:22:25.405 [2024-05-15 01:09:37.571771] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.405 [2024-05-15 01:09:37.571951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.405 [2024-05-15 01:09:37.571977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.405 [2024-05-15 01:09:37.571991] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.405 [2024-05-15 01:09:37.572003] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.405 [2024-05-15 01:09:37.572031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.405 qpair failed and we were unable to recover it. 00:22:25.405 [2024-05-15 01:09:37.581781] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.405 [2024-05-15 01:09:37.581954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.405 [2024-05-15 01:09:37.581980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.405 [2024-05-15 01:09:37.581995] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.405 [2024-05-15 01:09:37.582007] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.405 [2024-05-15 01:09:37.582035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.405 qpair failed and we were unable to recover it. 00:22:25.405 [2024-05-15 01:09:37.591894] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.405 [2024-05-15 01:09:37.592094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.405 [2024-05-15 01:09:37.592120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.405 [2024-05-15 01:09:37.592134] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.405 [2024-05-15 01:09:37.592147] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.405 [2024-05-15 01:09:37.592174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.405 qpair failed and we were unable to recover it. 
00:22:25.405 [2024-05-15 01:09:37.601883] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.405 [2024-05-15 01:09:37.602049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.405 [2024-05-15 01:09:37.602075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.405 [2024-05-15 01:09:37.602089] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.405 [2024-05-15 01:09:37.602101] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.405 [2024-05-15 01:09:37.602129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.405 qpair failed and we were unable to recover it. 00:22:25.405 [2024-05-15 01:09:37.611888] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.405 [2024-05-15 01:09:37.612068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.405 [2024-05-15 01:09:37.612094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.405 [2024-05-15 01:09:37.612109] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.405 [2024-05-15 01:09:37.612121] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.405 [2024-05-15 01:09:37.612150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.405 qpair failed and we were unable to recover it. 00:22:25.405 [2024-05-15 01:09:37.621917] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.405 [2024-05-15 01:09:37.622102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.405 [2024-05-15 01:09:37.622127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.405 [2024-05-15 01:09:37.622142] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.405 [2024-05-15 01:09:37.622154] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.405 [2024-05-15 01:09:37.622182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.405 qpair failed and we were unable to recover it. 
00:22:25.405 [2024-05-15 01:09:37.632013] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.405 [2024-05-15 01:09:37.632172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.406 [2024-05-15 01:09:37.632197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.406 [2024-05-15 01:09:37.632212] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.406 [2024-05-15 01:09:37.632224] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.406 [2024-05-15 01:09:37.632251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.406 qpair failed and we were unable to recover it. 00:22:25.406 [2024-05-15 01:09:37.641993] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.406 [2024-05-15 01:09:37.642156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.406 [2024-05-15 01:09:37.642181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.406 [2024-05-15 01:09:37.642196] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.406 [2024-05-15 01:09:37.642209] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.406 [2024-05-15 01:09:37.642237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.406 qpair failed and we were unable to recover it. 00:22:25.406 [2024-05-15 01:09:37.652014] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.406 [2024-05-15 01:09:37.652187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.406 [2024-05-15 01:09:37.652226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.406 [2024-05-15 01:09:37.652241] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.406 [2024-05-15 01:09:37.652254] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.406 [2024-05-15 01:09:37.652282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.406 qpair failed and we were unable to recover it. 
00:22:25.406 [2024-05-15 01:09:37.662059] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.406 [2024-05-15 01:09:37.662237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.406 [2024-05-15 01:09:37.662263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.406 [2024-05-15 01:09:37.662278] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.406 [2024-05-15 01:09:37.662291] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.406 [2024-05-15 01:09:37.662319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.406 qpair failed and we were unable to recover it. 00:22:25.406 [2024-05-15 01:09:37.672270] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.406 [2024-05-15 01:09:37.672463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.406 [2024-05-15 01:09:37.672489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.406 [2024-05-15 01:09:37.672504] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.406 [2024-05-15 01:09:37.672516] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.406 [2024-05-15 01:09:37.672545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.406 qpair failed and we were unable to recover it. 00:22:25.406 [2024-05-15 01:09:37.682152] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.406 [2024-05-15 01:09:37.682314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.406 [2024-05-15 01:09:37.682340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.406 [2024-05-15 01:09:37.682354] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.406 [2024-05-15 01:09:37.682367] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.406 [2024-05-15 01:09:37.682395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.406 qpair failed and we were unable to recover it. 
00:22:25.406 [2024-05-15 01:09:37.692190] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.406 [2024-05-15 01:09:37.692370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.406 [2024-05-15 01:09:37.692403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.406 [2024-05-15 01:09:37.692418] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.406 [2024-05-15 01:09:37.692430] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.406 [2024-05-15 01:09:37.692474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.406 qpair failed and we were unable to recover it. 00:22:25.406 [2024-05-15 01:09:37.702197] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.406 [2024-05-15 01:09:37.702372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.406 [2024-05-15 01:09:37.702397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.406 [2024-05-15 01:09:37.702412] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.406 [2024-05-15 01:09:37.702425] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.406 [2024-05-15 01:09:37.702452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.406 qpair failed and we were unable to recover it. 00:22:25.406 [2024-05-15 01:09:37.712186] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.406 [2024-05-15 01:09:37.712370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.406 [2024-05-15 01:09:37.712395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.406 [2024-05-15 01:09:37.712409] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.406 [2024-05-15 01:09:37.712421] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.406 [2024-05-15 01:09:37.712448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.406 qpair failed and we were unable to recover it. 
00:22:25.406 [2024-05-15 01:09:37.722271] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.406 [2024-05-15 01:09:37.722436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.406 [2024-05-15 01:09:37.722461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.406 [2024-05-15 01:09:37.722476] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.406 [2024-05-15 01:09:37.722488] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.406 [2024-05-15 01:09:37.722516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.406 qpair failed and we were unable to recover it. 00:22:25.406 [2024-05-15 01:09:37.732202] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.406 [2024-05-15 01:09:37.732375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.406 [2024-05-15 01:09:37.732400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.406 [2024-05-15 01:09:37.732415] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.406 [2024-05-15 01:09:37.732427] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.406 [2024-05-15 01:09:37.732454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.406 qpair failed and we were unable to recover it. 00:22:25.406 [2024-05-15 01:09:37.742320] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.406 [2024-05-15 01:09:37.742491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.406 [2024-05-15 01:09:37.742522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.406 [2024-05-15 01:09:37.742537] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.406 [2024-05-15 01:09:37.742549] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.406 [2024-05-15 01:09:37.742578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.406 qpair failed and we were unable to recover it. 
00:22:25.406 [2024-05-15 01:09:37.752379] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.406 [2024-05-15 01:09:37.752549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.406 [2024-05-15 01:09:37.752575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.406 [2024-05-15 01:09:37.752590] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.406 [2024-05-15 01:09:37.752602] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.406 [2024-05-15 01:09:37.752629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.406 qpair failed and we were unable to recover it. 00:22:25.406 [2024-05-15 01:09:37.762356] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.406 [2024-05-15 01:09:37.762534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.406 [2024-05-15 01:09:37.762560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.406 [2024-05-15 01:09:37.762575] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.406 [2024-05-15 01:09:37.762587] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.406 [2024-05-15 01:09:37.762615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.406 qpair failed and we were unable to recover it. 00:22:25.406 [2024-05-15 01:09:37.772361] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.407 [2024-05-15 01:09:37.772535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.407 [2024-05-15 01:09:37.772560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.407 [2024-05-15 01:09:37.772575] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.407 [2024-05-15 01:09:37.772587] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.407 [2024-05-15 01:09:37.772614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.407 qpair failed and we were unable to recover it. 
00:22:25.407 [2024-05-15 01:09:37.782399] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.407 [2024-05-15 01:09:37.782563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.407 [2024-05-15 01:09:37.782589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.407 [2024-05-15 01:09:37.782604] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.407 [2024-05-15 01:09:37.782616] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.407 [2024-05-15 01:09:37.782649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.407 qpair failed and we were unable to recover it. 00:22:25.407 [2024-05-15 01:09:37.792484] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.407 [2024-05-15 01:09:37.792658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.407 [2024-05-15 01:09:37.792684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.407 [2024-05-15 01:09:37.792702] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.407 [2024-05-15 01:09:37.792715] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.407 [2024-05-15 01:09:37.792744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.407 qpair failed and we were unable to recover it. 00:22:25.666 [2024-05-15 01:09:37.802507] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.666 [2024-05-15 01:09:37.802708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.666 [2024-05-15 01:09:37.802733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.667 [2024-05-15 01:09:37.802748] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.667 [2024-05-15 01:09:37.802761] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.667 [2024-05-15 01:09:37.802788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.667 qpair failed and we were unable to recover it. 
00:22:25.667 [2024-05-15 01:09:37.812478] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.667 [2024-05-15 01:09:37.812648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.667 [2024-05-15 01:09:37.812674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.667 [2024-05-15 01:09:37.812689] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.667 [2024-05-15 01:09:37.812701] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.667 [2024-05-15 01:09:37.812728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.667 qpair failed and we were unable to recover it. 00:22:25.667 [2024-05-15 01:09:37.822533] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.667 [2024-05-15 01:09:37.822737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.667 [2024-05-15 01:09:37.822762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.667 [2024-05-15 01:09:37.822777] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.667 [2024-05-15 01:09:37.822789] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.667 [2024-05-15 01:09:37.822817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.667 qpair failed and we were unable to recover it. 00:22:25.667 [2024-05-15 01:09:37.832529] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.667 [2024-05-15 01:09:37.832690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.667 [2024-05-15 01:09:37.832720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.667 [2024-05-15 01:09:37.832736] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.667 [2024-05-15 01:09:37.832748] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.667 [2024-05-15 01:09:37.832775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.667 qpair failed and we were unable to recover it. 
00:22:25.667 [2024-05-15 01:09:37.842574] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.667 [2024-05-15 01:09:37.842745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.667 [2024-05-15 01:09:37.842770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.667 [2024-05-15 01:09:37.842785] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.667 [2024-05-15 01:09:37.842797] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.667 [2024-05-15 01:09:37.842825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.667 qpair failed and we were unable to recover it. 00:22:25.667 [2024-05-15 01:09:37.852583] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.667 [2024-05-15 01:09:37.852752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.667 [2024-05-15 01:09:37.852778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.667 [2024-05-15 01:09:37.852792] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.667 [2024-05-15 01:09:37.852804] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.667 [2024-05-15 01:09:37.852832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.667 qpair failed and we were unable to recover it. 00:22:25.667 [2024-05-15 01:09:37.862652] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.667 [2024-05-15 01:09:37.862824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.667 [2024-05-15 01:09:37.862850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.667 [2024-05-15 01:09:37.862865] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.667 [2024-05-15 01:09:37.862877] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.667 [2024-05-15 01:09:37.862904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.667 qpair failed and we were unable to recover it. 
00:22:25.667 [2024-05-15 01:09:37.872654] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.667 [2024-05-15 01:09:37.872811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.667 [2024-05-15 01:09:37.872837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.667 [2024-05-15 01:09:37.872851] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.667 [2024-05-15 01:09:37.872869] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.667 [2024-05-15 01:09:37.872897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.667 qpair failed and we were unable to recover it. 00:22:25.667 [2024-05-15 01:09:37.882683] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.667 [2024-05-15 01:09:37.882893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.667 [2024-05-15 01:09:37.882918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.667 [2024-05-15 01:09:37.882938] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.667 [2024-05-15 01:09:37.882953] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.667 [2024-05-15 01:09:37.882980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.667 qpair failed and we were unable to recover it. 00:22:25.667 [2024-05-15 01:09:37.892729] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.667 [2024-05-15 01:09:37.892947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.667 [2024-05-15 01:09:37.892973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.667 [2024-05-15 01:09:37.892987] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.667 [2024-05-15 01:09:37.893000] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.667 [2024-05-15 01:09:37.893028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.667 qpair failed and we were unable to recover it. 
00:22:25.667 [2024-05-15 01:09:37.902728] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.667 [2024-05-15 01:09:37.902905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.667 [2024-05-15 01:09:37.902936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.667 [2024-05-15 01:09:37.902953] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.667 [2024-05-15 01:09:37.902965] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.667 [2024-05-15 01:09:37.902993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.667 qpair failed and we were unable to recover it. 00:22:25.667 [2024-05-15 01:09:37.912785] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.667 [2024-05-15 01:09:37.912960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.667 [2024-05-15 01:09:37.912985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.667 [2024-05-15 01:09:37.913000] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.667 [2024-05-15 01:09:37.913013] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.667 [2024-05-15 01:09:37.913040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.667 qpair failed and we were unable to recover it. 00:22:25.667 [2024-05-15 01:09:37.922807] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.667 [2024-05-15 01:09:37.922987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.667 [2024-05-15 01:09:37.923018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.667 [2024-05-15 01:09:37.923033] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.667 [2024-05-15 01:09:37.923046] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.667 [2024-05-15 01:09:37.923074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.667 qpair failed and we were unable to recover it. 
00:22:25.667 [2024-05-15 01:09:37.932822] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.667 [2024-05-15 01:09:37.933038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.667 [2024-05-15 01:09:37.933063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.667 [2024-05-15 01:09:37.933077] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.667 [2024-05-15 01:09:37.933090] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.667 [2024-05-15 01:09:37.933118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.667 qpair failed and we were unable to recover it. 00:22:25.667 [2024-05-15 01:09:37.942970] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.668 [2024-05-15 01:09:37.943132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.668 [2024-05-15 01:09:37.943157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.668 [2024-05-15 01:09:37.943172] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.668 [2024-05-15 01:09:37.943184] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.668 [2024-05-15 01:09:37.943213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.668 qpair failed and we were unable to recover it. 00:22:25.668 [2024-05-15 01:09:37.952914] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.668 [2024-05-15 01:09:37.953117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.668 [2024-05-15 01:09:37.953144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.668 [2024-05-15 01:09:37.953163] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.668 [2024-05-15 01:09:37.953176] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.668 [2024-05-15 01:09:37.953205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.668 qpair failed and we were unable to recover it. 
00:22:25.668 [2024-05-15 01:09:37.962908] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.668 [2024-05-15 01:09:37.963083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.668 [2024-05-15 01:09:37.963110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.668 [2024-05-15 01:09:37.963130] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.668 [2024-05-15 01:09:37.963148] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.668 [2024-05-15 01:09:37.963177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.668 qpair failed and we were unable to recover it. 00:22:25.668 [2024-05-15 01:09:37.972955] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.668 [2024-05-15 01:09:37.973121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.668 [2024-05-15 01:09:37.973146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.668 [2024-05-15 01:09:37.973161] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.668 [2024-05-15 01:09:37.973173] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.668 [2024-05-15 01:09:37.973202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.668 qpair failed and we were unable to recover it. 00:22:25.668 [2024-05-15 01:09:37.982963] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.668 [2024-05-15 01:09:37.983155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.668 [2024-05-15 01:09:37.983181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.668 [2024-05-15 01:09:37.983196] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.668 [2024-05-15 01:09:37.983208] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.668 [2024-05-15 01:09:37.983235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.668 qpair failed and we were unable to recover it. 
00:22:25.668 [2024-05-15 01:09:37.992980] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.668 [2024-05-15 01:09:37.993138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.668 [2024-05-15 01:09:37.993163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.668 [2024-05-15 01:09:37.993178] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.668 [2024-05-15 01:09:37.993190] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.668 [2024-05-15 01:09:37.993218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.668 qpair failed and we were unable to recover it. 00:22:25.668 [2024-05-15 01:09:38.003105] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.668 [2024-05-15 01:09:38.003292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.668 [2024-05-15 01:09:38.003317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.668 [2024-05-15 01:09:38.003331] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.668 [2024-05-15 01:09:38.003344] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.668 [2024-05-15 01:09:38.003372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.668 qpair failed and we were unable to recover it. 00:22:25.668 [2024-05-15 01:09:38.013052] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.668 [2024-05-15 01:09:38.013221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.668 [2024-05-15 01:09:38.013246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.668 [2024-05-15 01:09:38.013261] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.668 [2024-05-15 01:09:38.013273] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.668 [2024-05-15 01:09:38.013301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.668 qpair failed and we were unable to recover it. 
00:22:25.668 [2024-05-15 01:09:38.023133] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.668 [2024-05-15 01:09:38.023312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.668 [2024-05-15 01:09:38.023338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.668 [2024-05-15 01:09:38.023353] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.668 [2024-05-15 01:09:38.023365] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.668 [2024-05-15 01:09:38.023393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.668 qpair failed and we were unable to recover it. 00:22:25.668 [2024-05-15 01:09:38.033101] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.668 [2024-05-15 01:09:38.033267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.668 [2024-05-15 01:09:38.033293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.668 [2024-05-15 01:09:38.033308] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.668 [2024-05-15 01:09:38.033320] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.668 [2024-05-15 01:09:38.033348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.668 qpair failed and we were unable to recover it. 00:22:25.668 [2024-05-15 01:09:38.043179] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.668 [2024-05-15 01:09:38.043353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.668 [2024-05-15 01:09:38.043378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.668 [2024-05-15 01:09:38.043393] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.668 [2024-05-15 01:09:38.043405] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.668 [2024-05-15 01:09:38.043434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.668 qpair failed and we were unable to recover it. 
00:22:25.668 [2024-05-15 01:09:38.053202] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.668 [2024-05-15 01:09:38.053402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.668 [2024-05-15 01:09:38.053427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.668 [2024-05-15 01:09:38.053442] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.668 [2024-05-15 01:09:38.053460] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.668 [2024-05-15 01:09:38.053488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.668 qpair failed and we were unable to recover it. 00:22:25.927 [2024-05-15 01:09:38.063204] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.927 [2024-05-15 01:09:38.063375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.928 [2024-05-15 01:09:38.063401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.928 [2024-05-15 01:09:38.063416] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.928 [2024-05-15 01:09:38.063428] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.928 [2024-05-15 01:09:38.063456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.928 qpair failed and we were unable to recover it. 00:22:25.928 [2024-05-15 01:09:38.073208] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.928 [2024-05-15 01:09:38.073365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.928 [2024-05-15 01:09:38.073390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.928 [2024-05-15 01:09:38.073405] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.928 [2024-05-15 01:09:38.073417] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.928 [2024-05-15 01:09:38.073444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.928 qpair failed and we were unable to recover it. 
00:22:25.928 [2024-05-15 01:09:38.083294] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.928 [2024-05-15 01:09:38.083456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.928 [2024-05-15 01:09:38.083481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.928 [2024-05-15 01:09:38.083496] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.928 [2024-05-15 01:09:38.083508] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.928 [2024-05-15 01:09:38.083536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.928 qpair failed and we were unable to recover it. 00:22:25.928 [2024-05-15 01:09:38.093282] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.928 [2024-05-15 01:09:38.093496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.928 [2024-05-15 01:09:38.093521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.928 [2024-05-15 01:09:38.093536] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.928 [2024-05-15 01:09:38.093549] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.928 [2024-05-15 01:09:38.093576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.928 qpair failed and we were unable to recover it. 00:22:25.928 [2024-05-15 01:09:38.103297] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.928 [2024-05-15 01:09:38.103460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.928 [2024-05-15 01:09:38.103485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.928 [2024-05-15 01:09:38.103500] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.928 [2024-05-15 01:09:38.103512] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.928 [2024-05-15 01:09:38.103540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.928 qpair failed and we were unable to recover it. 
00:22:25.928 [2024-05-15 01:09:38.113320] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.928 [2024-05-15 01:09:38.113478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.928 [2024-05-15 01:09:38.113502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.928 [2024-05-15 01:09:38.113517] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.928 [2024-05-15 01:09:38.113529] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.928 [2024-05-15 01:09:38.113557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.928 qpair failed and we were unable to recover it. 00:22:25.928 [2024-05-15 01:09:38.123349] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.928 [2024-05-15 01:09:38.123504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.928 [2024-05-15 01:09:38.123530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.928 [2024-05-15 01:09:38.123544] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.928 [2024-05-15 01:09:38.123557] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.928 [2024-05-15 01:09:38.123584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.928 qpair failed and we were unable to recover it. 00:22:25.928 [2024-05-15 01:09:38.133409] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.928 [2024-05-15 01:09:38.133643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.928 [2024-05-15 01:09:38.133669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.928 [2024-05-15 01:09:38.133684] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.928 [2024-05-15 01:09:38.133696] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.928 [2024-05-15 01:09:38.133724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.928 qpair failed and we were unable to recover it. 
00:22:25.928 [2024-05-15 01:09:38.143443] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.928 [2024-05-15 01:09:38.143618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.928 [2024-05-15 01:09:38.143643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.928 [2024-05-15 01:09:38.143667] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.928 [2024-05-15 01:09:38.143680] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.928 [2024-05-15 01:09:38.143709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.928 qpair failed and we were unable to recover it. 00:22:25.928 [2024-05-15 01:09:38.153476] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.928 [2024-05-15 01:09:38.153675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.928 [2024-05-15 01:09:38.153702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.928 [2024-05-15 01:09:38.153716] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.928 [2024-05-15 01:09:38.153732] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.928 [2024-05-15 01:09:38.153762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.928 qpair failed and we were unable to recover it. 00:22:25.928 [2024-05-15 01:09:38.163518] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.928 [2024-05-15 01:09:38.163684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.928 [2024-05-15 01:09:38.163710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.928 [2024-05-15 01:09:38.163725] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.928 [2024-05-15 01:09:38.163737] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.928 [2024-05-15 01:09:38.163765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.928 qpair failed and we were unable to recover it. 
00:22:25.928 [2024-05-15 01:09:38.173503] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.928 [2024-05-15 01:09:38.173668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.928 [2024-05-15 01:09:38.173693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.928 [2024-05-15 01:09:38.173708] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.928 [2024-05-15 01:09:38.173720] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.928 [2024-05-15 01:09:38.173748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.928 qpair failed and we were unable to recover it. 00:22:25.928 [2024-05-15 01:09:38.183530] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.928 [2024-05-15 01:09:38.183694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.928 [2024-05-15 01:09:38.183719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.928 [2024-05-15 01:09:38.183734] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.928 [2024-05-15 01:09:38.183746] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.928 [2024-05-15 01:09:38.183774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.928 qpair failed and we were unable to recover it. 00:22:25.928 [2024-05-15 01:09:38.193566] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.928 [2024-05-15 01:09:38.193728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.928 [2024-05-15 01:09:38.193753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.928 [2024-05-15 01:09:38.193768] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.928 [2024-05-15 01:09:38.193780] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.928 [2024-05-15 01:09:38.193808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.928 qpair failed and we were unable to recover it. 
00:22:25.929 [2024-05-15 01:09:38.203575] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.929 [2024-05-15 01:09:38.203731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.929 [2024-05-15 01:09:38.203756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.929 [2024-05-15 01:09:38.203771] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.929 [2024-05-15 01:09:38.203783] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.929 [2024-05-15 01:09:38.203811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.929 qpair failed and we were unable to recover it. 00:22:25.929 [2024-05-15 01:09:38.213650] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.929 [2024-05-15 01:09:38.213832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.929 [2024-05-15 01:09:38.213857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.929 [2024-05-15 01:09:38.213872] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.929 [2024-05-15 01:09:38.213884] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.929 [2024-05-15 01:09:38.213912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.929 qpair failed and we were unable to recover it. 00:22:25.929 [2024-05-15 01:09:38.223645] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.929 [2024-05-15 01:09:38.223807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.929 [2024-05-15 01:09:38.223832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.929 [2024-05-15 01:09:38.223847] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.929 [2024-05-15 01:09:38.223859] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.929 [2024-05-15 01:09:38.223887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.929 qpair failed and we were unable to recover it. 
00:22:25.929 [2024-05-15 01:09:38.233679] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.929 [2024-05-15 01:09:38.233849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.929 [2024-05-15 01:09:38.233874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.929 [2024-05-15 01:09:38.233895] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.929 [2024-05-15 01:09:38.233908] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.929 [2024-05-15 01:09:38.233942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.929 qpair failed and we were unable to recover it. 00:22:25.929 [2024-05-15 01:09:38.243694] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.929 [2024-05-15 01:09:38.243864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.929 [2024-05-15 01:09:38.243889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.929 [2024-05-15 01:09:38.243903] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.929 [2024-05-15 01:09:38.243916] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.929 [2024-05-15 01:09:38.243948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.929 qpair failed and we were unable to recover it. 00:22:25.929 [2024-05-15 01:09:38.253787] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.929 [2024-05-15 01:09:38.253959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.929 [2024-05-15 01:09:38.253984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.929 [2024-05-15 01:09:38.253999] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.929 [2024-05-15 01:09:38.254011] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.929 [2024-05-15 01:09:38.254040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.929 qpair failed and we were unable to recover it. 
00:22:25.929 [2024-05-15 01:09:38.263798] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.929 [2024-05-15 01:09:38.263962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.929 [2024-05-15 01:09:38.263987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.929 [2024-05-15 01:09:38.264002] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.929 [2024-05-15 01:09:38.264014] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.929 [2024-05-15 01:09:38.264042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.929 qpair failed and we were unable to recover it. 00:22:25.929 [2024-05-15 01:09:38.273815] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.929 [2024-05-15 01:09:38.273997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.929 [2024-05-15 01:09:38.274022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.929 [2024-05-15 01:09:38.274037] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.929 [2024-05-15 01:09:38.274049] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.929 [2024-05-15 01:09:38.274078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.929 qpair failed and we were unable to recover it. 00:22:25.929 [2024-05-15 01:09:38.283847] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.929 [2024-05-15 01:09:38.284013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.929 [2024-05-15 01:09:38.284038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.929 [2024-05-15 01:09:38.284052] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.929 [2024-05-15 01:09:38.284065] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.929 [2024-05-15 01:09:38.284093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.929 qpair failed and we were unable to recover it. 
00:22:25.929 [2024-05-15 01:09:38.293849] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.929 [2024-05-15 01:09:38.294026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.929 [2024-05-15 01:09:38.294052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.929 [2024-05-15 01:09:38.294067] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.929 [2024-05-15 01:09:38.294079] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.929 [2024-05-15 01:09:38.294107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.929 qpair failed and we were unable to recover it. 00:22:25.929 [2024-05-15 01:09:38.303916] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.929 [2024-05-15 01:09:38.304113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.929 [2024-05-15 01:09:38.304138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.929 [2024-05-15 01:09:38.304153] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.929 [2024-05-15 01:09:38.304165] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.929 [2024-05-15 01:09:38.304194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.929 qpair failed and we were unable to recover it. 00:22:25.929 [2024-05-15 01:09:38.313901] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:25.929 [2024-05-15 01:09:38.314071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:25.929 [2024-05-15 01:09:38.314096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:25.929 [2024-05-15 01:09:38.314110] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:25.929 [2024-05-15 01:09:38.314123] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:25.929 [2024-05-15 01:09:38.314152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:25.929 qpair failed and we were unable to recover it. 
00:22:26.191 [2024-05-15 01:09:38.323949] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.191 [2024-05-15 01:09:38.324115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.191 [2024-05-15 01:09:38.324148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.191 [2024-05-15 01:09:38.324164] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.191 [2024-05-15 01:09:38.324177] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.191 [2024-05-15 01:09:38.324205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.191 qpair failed and we were unable to recover it. 00:22:26.191 [2024-05-15 01:09:38.333993] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.191 [2024-05-15 01:09:38.334223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.191 [2024-05-15 01:09:38.334250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.191 [2024-05-15 01:09:38.334265] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.191 [2024-05-15 01:09:38.334278] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.191 [2024-05-15 01:09:38.334307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.191 qpair failed and we were unable to recover it. 00:22:26.191 [2024-05-15 01:09:38.343997] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.191 [2024-05-15 01:09:38.344203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.191 [2024-05-15 01:09:38.344229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.191 [2024-05-15 01:09:38.344244] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.191 [2024-05-15 01:09:38.344257] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.191 [2024-05-15 01:09:38.344285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.191 qpair failed and we were unable to recover it. 
00:22:26.191 [2024-05-15 01:09:38.354072] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.191 [2024-05-15 01:09:38.354235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.191 [2024-05-15 01:09:38.354261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.191 [2024-05-15 01:09:38.354276] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.191 [2024-05-15 01:09:38.354288] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.191 [2024-05-15 01:09:38.354316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.191 qpair failed and we were unable to recover it. 00:22:26.191 [2024-05-15 01:09:38.364058] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.191 [2024-05-15 01:09:38.364222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.191 [2024-05-15 01:09:38.364248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.191 [2024-05-15 01:09:38.364262] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.191 [2024-05-15 01:09:38.364275] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.191 [2024-05-15 01:09:38.364303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.191 qpair failed and we were unable to recover it. 00:22:26.191 [2024-05-15 01:09:38.374093] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.191 [2024-05-15 01:09:38.374293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.191 [2024-05-15 01:09:38.374318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.191 [2024-05-15 01:09:38.374333] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.191 [2024-05-15 01:09:38.374346] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.191 [2024-05-15 01:09:38.374373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.191 qpair failed and we were unable to recover it. 
00:22:26.191 [2024-05-15 01:09:38.384112] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.191 [2024-05-15 01:09:38.384317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.191 [2024-05-15 01:09:38.384343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.191 [2024-05-15 01:09:38.384358] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.191 [2024-05-15 01:09:38.384370] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.191 [2024-05-15 01:09:38.384398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.191 qpair failed and we were unable to recover it. 00:22:26.191 [2024-05-15 01:09:38.394213] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.191 [2024-05-15 01:09:38.394418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.191 [2024-05-15 01:09:38.394443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.191 [2024-05-15 01:09:38.394458] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.191 [2024-05-15 01:09:38.394471] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.191 [2024-05-15 01:09:38.394499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.191 qpair failed and we were unable to recover it. 00:22:26.191 [2024-05-15 01:09:38.404163] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.191 [2024-05-15 01:09:38.404379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.191 [2024-05-15 01:09:38.404404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.191 [2024-05-15 01:09:38.404419] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.191 [2024-05-15 01:09:38.404432] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.191 [2024-05-15 01:09:38.404459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.191 qpair failed and we were unable to recover it. 
00:22:26.191 [2024-05-15 01:09:38.414233] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.191 [2024-05-15 01:09:38.414427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.191 [2024-05-15 01:09:38.414458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.191 [2024-05-15 01:09:38.414473] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.191 [2024-05-15 01:09:38.414486] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.191 [2024-05-15 01:09:38.414514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.191 qpair failed and we were unable to recover it. 00:22:26.192 [2024-05-15 01:09:38.424231] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.192 [2024-05-15 01:09:38.424394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.192 [2024-05-15 01:09:38.424420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.192 [2024-05-15 01:09:38.424434] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.192 [2024-05-15 01:09:38.424447] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.192 [2024-05-15 01:09:38.424475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.192 qpair failed and we were unable to recover it. 00:22:26.192 [2024-05-15 01:09:38.434236] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.192 [2024-05-15 01:09:38.434391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.192 [2024-05-15 01:09:38.434417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.192 [2024-05-15 01:09:38.434432] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.192 [2024-05-15 01:09:38.434444] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.192 [2024-05-15 01:09:38.434472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.192 qpair failed and we were unable to recover it. 
00:22:26.192 [2024-05-15 01:09:38.444305] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.192 [2024-05-15 01:09:38.444471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.192 [2024-05-15 01:09:38.444502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.192 [2024-05-15 01:09:38.444519] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.192 [2024-05-15 01:09:38.444532] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.192 [2024-05-15 01:09:38.444562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.192 qpair failed and we were unable to recover it. 00:22:26.192 [2024-05-15 01:09:38.454310] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.192 [2024-05-15 01:09:38.454507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.192 [2024-05-15 01:09:38.454533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.192 [2024-05-15 01:09:38.454548] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.192 [2024-05-15 01:09:38.454560] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.192 [2024-05-15 01:09:38.454593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.192 qpair failed and we were unable to recover it. 00:22:26.192 [2024-05-15 01:09:38.464373] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.192 [2024-05-15 01:09:38.464535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.192 [2024-05-15 01:09:38.464559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.192 [2024-05-15 01:09:38.464573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.192 [2024-05-15 01:09:38.464585] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.192 [2024-05-15 01:09:38.464612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.192 qpair failed and we were unable to recover it. 
00:22:26.192 [2024-05-15 01:09:38.474393] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.192 [2024-05-15 01:09:38.474555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.192 [2024-05-15 01:09:38.474580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.192 [2024-05-15 01:09:38.474595] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.192 [2024-05-15 01:09:38.474607] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.192 [2024-05-15 01:09:38.474635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.192 qpair failed and we were unable to recover it. 00:22:26.192 [2024-05-15 01:09:38.484387] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.192 [2024-05-15 01:09:38.484545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.192 [2024-05-15 01:09:38.484570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.192 [2024-05-15 01:09:38.484584] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.192 [2024-05-15 01:09:38.484596] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.192 [2024-05-15 01:09:38.484624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.192 qpair failed and we were unable to recover it. 00:22:26.192 [2024-05-15 01:09:38.494406] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.192 [2024-05-15 01:09:38.494569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.192 [2024-05-15 01:09:38.494595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.192 [2024-05-15 01:09:38.494609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.192 [2024-05-15 01:09:38.494621] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.192 [2024-05-15 01:09:38.494649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.192 qpair failed and we were unable to recover it. 
00:22:26.192 [2024-05-15 01:09:38.504427] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.192 [2024-05-15 01:09:38.504602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.192 [2024-05-15 01:09:38.504632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.192 [2024-05-15 01:09:38.504647] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.192 [2024-05-15 01:09:38.504660] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.192 [2024-05-15 01:09:38.504687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.192 qpair failed and we were unable to recover it. 00:22:26.192 [2024-05-15 01:09:38.514460] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.192 [2024-05-15 01:09:38.514622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.192 [2024-05-15 01:09:38.514647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.192 [2024-05-15 01:09:38.514662] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.192 [2024-05-15 01:09:38.514674] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.192 [2024-05-15 01:09:38.514701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.192 qpair failed and we were unable to recover it. 00:22:26.192 [2024-05-15 01:09:38.524519] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.192 [2024-05-15 01:09:38.524687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.192 [2024-05-15 01:09:38.524712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.192 [2024-05-15 01:09:38.524727] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.192 [2024-05-15 01:09:38.524739] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.192 [2024-05-15 01:09:38.524766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.192 qpair failed and we were unable to recover it. 
00:22:26.192 [2024-05-15 01:09:38.534552] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.192 [2024-05-15 01:09:38.534729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.192 [2024-05-15 01:09:38.534754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.192 [2024-05-15 01:09:38.534769] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.192 [2024-05-15 01:09:38.534781] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.192 [2024-05-15 01:09:38.534809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.192 qpair failed and we were unable to recover it. 00:22:26.192 [2024-05-15 01:09:38.544556] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.192 [2024-05-15 01:09:38.544733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.192 [2024-05-15 01:09:38.544758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.192 [2024-05-15 01:09:38.544773] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.192 [2024-05-15 01:09:38.544786] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.192 [2024-05-15 01:09:38.544819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.192 qpair failed and we were unable to recover it. 00:22:26.192 [2024-05-15 01:09:38.554630] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.192 [2024-05-15 01:09:38.554801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.192 [2024-05-15 01:09:38.554826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.192 [2024-05-15 01:09:38.554840] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.192 [2024-05-15 01:09:38.554853] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.192 [2024-05-15 01:09:38.554881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.193 qpair failed and we were unable to recover it. 
00:22:26.193 [2024-05-15 01:09:38.564643] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.193 [2024-05-15 01:09:38.564810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.193 [2024-05-15 01:09:38.564835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.193 [2024-05-15 01:09:38.564850] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.193 [2024-05-15 01:09:38.564863] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.193 [2024-05-15 01:09:38.564890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.193 qpair failed and we were unable to recover it. 00:22:26.193 [2024-05-15 01:09:38.574644] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.193 [2024-05-15 01:09:38.574808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.193 [2024-05-15 01:09:38.574833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.193 [2024-05-15 01:09:38.574847] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.193 [2024-05-15 01:09:38.574859] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.193 [2024-05-15 01:09:38.574887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.193 qpair failed and we were unable to recover it. 00:22:26.452 [2024-05-15 01:09:38.584675] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.452 [2024-05-15 01:09:38.584832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.452 [2024-05-15 01:09:38.584858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.452 [2024-05-15 01:09:38.584872] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.452 [2024-05-15 01:09:38.584885] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.452 [2024-05-15 01:09:38.584913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.452 qpair failed and we were unable to recover it. 
00:22:26.452 [2024-05-15 01:09:38.594700] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.452 [2024-05-15 01:09:38.594861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.452 [2024-05-15 01:09:38.594892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.452 [2024-05-15 01:09:38.594908] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.452 [2024-05-15 01:09:38.594920] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.452 [2024-05-15 01:09:38.594956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.452 qpair failed and we were unable to recover it. 00:22:26.452 [2024-05-15 01:09:38.604744] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.452 [2024-05-15 01:09:38.604948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.452 [2024-05-15 01:09:38.604974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.452 [2024-05-15 01:09:38.604989] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.452 [2024-05-15 01:09:38.605002] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.452 [2024-05-15 01:09:38.605030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.452 qpair failed and we were unable to recover it. 00:22:26.452 [2024-05-15 01:09:38.614804] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.452 [2024-05-15 01:09:38.614974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.452 [2024-05-15 01:09:38.614999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.452 [2024-05-15 01:09:38.615014] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.452 [2024-05-15 01:09:38.615026] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.452 [2024-05-15 01:09:38.615054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.452 qpair failed and we were unable to recover it. 
00:22:26.452 [2024-05-15 01:09:38.624781] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.452 [2024-05-15 01:09:38.624959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.453 [2024-05-15 01:09:38.624984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.453 [2024-05-15 01:09:38.624999] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.453 [2024-05-15 01:09:38.625011] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.453 [2024-05-15 01:09:38.625039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.453 qpair failed and we were unable to recover it. 00:22:26.453 [2024-05-15 01:09:38.634835] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.453 [2024-05-15 01:09:38.635033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.453 [2024-05-15 01:09:38.635058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.453 [2024-05-15 01:09:38.635073] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.453 [2024-05-15 01:09:38.635091] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.453 [2024-05-15 01:09:38.635120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.453 qpair failed and we were unable to recover it. 00:22:26.453 [2024-05-15 01:09:38.644867] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.453 [2024-05-15 01:09:38.645064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.453 [2024-05-15 01:09:38.645090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.453 [2024-05-15 01:09:38.645105] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.453 [2024-05-15 01:09:38.645117] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.453 [2024-05-15 01:09:38.645145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.453 qpair failed and we were unable to recover it. 
00:22:26.453 [2024-05-15 01:09:38.654886] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.453 [2024-05-15 01:09:38.655097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.453 [2024-05-15 01:09:38.655122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.453 [2024-05-15 01:09:38.655137] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.453 [2024-05-15 01:09:38.655149] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.453 [2024-05-15 01:09:38.655176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.453 qpair failed and we were unable to recover it. 00:22:26.453 [2024-05-15 01:09:38.664902] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.453 [2024-05-15 01:09:38.665074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.453 [2024-05-15 01:09:38.665100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.453 [2024-05-15 01:09:38.665115] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.453 [2024-05-15 01:09:38.665127] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.453 [2024-05-15 01:09:38.665155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.453 qpair failed and we were unable to recover it. 00:22:26.453 [2024-05-15 01:09:38.674946] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.453 [2024-05-15 01:09:38.675104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.453 [2024-05-15 01:09:38.675130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.453 [2024-05-15 01:09:38.675145] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.453 [2024-05-15 01:09:38.675157] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.453 [2024-05-15 01:09:38.675186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.453 qpair failed and we were unable to recover it. 
00:22:26.453 [2024-05-15 01:09:38.684972] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.453 [2024-05-15 01:09:38.685159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.453 [2024-05-15 01:09:38.685185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.453 [2024-05-15 01:09:38.685200] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.453 [2024-05-15 01:09:38.685212] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.453 [2024-05-15 01:09:38.685240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.453 qpair failed and we were unable to recover it. 00:22:26.453 [2024-05-15 01:09:38.695035] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.453 [2024-05-15 01:09:38.695200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.453 [2024-05-15 01:09:38.695225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.453 [2024-05-15 01:09:38.695240] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.453 [2024-05-15 01:09:38.695252] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.453 [2024-05-15 01:09:38.695280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.453 qpair failed and we were unable to recover it. 00:22:26.453 [2024-05-15 01:09:38.705020] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.453 [2024-05-15 01:09:38.705186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.453 [2024-05-15 01:09:38.705211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.453 [2024-05-15 01:09:38.705226] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.453 [2024-05-15 01:09:38.705239] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.453 [2024-05-15 01:09:38.705267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.453 qpair failed and we were unable to recover it. 
00:22:26.453 [2024-05-15 01:09:38.715044] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.453 [2024-05-15 01:09:38.715201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.453 [2024-05-15 01:09:38.715226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.453 [2024-05-15 01:09:38.715241] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.453 [2024-05-15 01:09:38.715253] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.453 [2024-05-15 01:09:38.715281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.453 qpair failed and we were unable to recover it. 00:22:26.453 [2024-05-15 01:09:38.725091] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.453 [2024-05-15 01:09:38.725253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.453 [2024-05-15 01:09:38.725279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.453 [2024-05-15 01:09:38.725294] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.453 [2024-05-15 01:09:38.725312] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.453 [2024-05-15 01:09:38.725341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.453 qpair failed and we were unable to recover it. 00:22:26.453 [2024-05-15 01:09:38.735147] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.453 [2024-05-15 01:09:38.735348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.453 [2024-05-15 01:09:38.735374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.453 [2024-05-15 01:09:38.735390] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.453 [2024-05-15 01:09:38.735406] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.453 [2024-05-15 01:09:38.735435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.453 qpair failed and we were unable to recover it. 
00:22:26.453 [2024-05-15 01:09:38.745160] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.453 [2024-05-15 01:09:38.745321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.453 [2024-05-15 01:09:38.745347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.453 [2024-05-15 01:09:38.745362] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.453 [2024-05-15 01:09:38.745374] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.453 [2024-05-15 01:09:38.745401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.453 qpair failed and we were unable to recover it. 00:22:26.453 [2024-05-15 01:09:38.755185] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.453 [2024-05-15 01:09:38.755423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.453 [2024-05-15 01:09:38.755448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.453 [2024-05-15 01:09:38.755463] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.453 [2024-05-15 01:09:38.755475] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.453 [2024-05-15 01:09:38.755503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.453 qpair failed and we were unable to recover it. 00:22:26.453 [2024-05-15 01:09:38.765221] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.454 [2024-05-15 01:09:38.765388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.454 [2024-05-15 01:09:38.765414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.454 [2024-05-15 01:09:38.765429] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.454 [2024-05-15 01:09:38.765441] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.454 [2024-05-15 01:09:38.765469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.454 qpair failed and we were unable to recover it. 
00:22:26.454 [2024-05-15 01:09:38.775238] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.454 [2024-05-15 01:09:38.775447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.454 [2024-05-15 01:09:38.775472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.454 [2024-05-15 01:09:38.775486] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.454 [2024-05-15 01:09:38.775499] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.454 [2024-05-15 01:09:38.775527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.454 qpair failed and we were unable to recover it. 00:22:26.454 [2024-05-15 01:09:38.785268] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.454 [2024-05-15 01:09:38.785454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.454 [2024-05-15 01:09:38.785479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.454 [2024-05-15 01:09:38.785494] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.454 [2024-05-15 01:09:38.785506] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.454 [2024-05-15 01:09:38.785534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.454 qpair failed and we were unable to recover it. 00:22:26.454 [2024-05-15 01:09:38.795374] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.454 [2024-05-15 01:09:38.795552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.454 [2024-05-15 01:09:38.795578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.454 [2024-05-15 01:09:38.795593] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.454 [2024-05-15 01:09:38.795604] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.454 [2024-05-15 01:09:38.795632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.454 qpair failed and we were unable to recover it. 
00:22:26.454 [2024-05-15 01:09:38.805339] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.454 [2024-05-15 01:09:38.805499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.454 [2024-05-15 01:09:38.805524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.454 [2024-05-15 01:09:38.805539] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.454 [2024-05-15 01:09:38.805551] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.454 [2024-05-15 01:09:38.805579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.454 qpair failed and we were unable to recover it. 00:22:26.454 [2024-05-15 01:09:38.815360] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.454 [2024-05-15 01:09:38.815528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.454 [2024-05-15 01:09:38.815553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.454 [2024-05-15 01:09:38.815568] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.454 [2024-05-15 01:09:38.815586] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.454 [2024-05-15 01:09:38.815615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.454 qpair failed and we were unable to recover it. 00:22:26.454 [2024-05-15 01:09:38.825383] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.454 [2024-05-15 01:09:38.825557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.454 [2024-05-15 01:09:38.825581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.454 [2024-05-15 01:09:38.825596] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.454 [2024-05-15 01:09:38.825608] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.454 [2024-05-15 01:09:38.825636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.454 qpair failed and we were unable to recover it. 
00:22:26.454 [2024-05-15 01:09:38.835417] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.454 [2024-05-15 01:09:38.835599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.454 [2024-05-15 01:09:38.835624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.454 [2024-05-15 01:09:38.835638] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.454 [2024-05-15 01:09:38.835651] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.454 [2024-05-15 01:09:38.835678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.454 qpair failed and we were unable to recover it. 00:22:26.713 [2024-05-15 01:09:38.845455] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.713 [2024-05-15 01:09:38.845619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.713 [2024-05-15 01:09:38.845646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.713 [2024-05-15 01:09:38.845661] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.713 [2024-05-15 01:09:38.845673] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.713 [2024-05-15 01:09:38.845701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.713 qpair failed and we were unable to recover it. 00:22:26.713 [2024-05-15 01:09:38.855450] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.713 [2024-05-15 01:09:38.855620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.713 [2024-05-15 01:09:38.855646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.713 [2024-05-15 01:09:38.855661] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.713 [2024-05-15 01:09:38.855673] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.713 [2024-05-15 01:09:38.855701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.713 qpair failed and we were unable to recover it. 
00:22:26.713 [2024-05-15 01:09:38.865486] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.713 [2024-05-15 01:09:38.865650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.713 [2024-05-15 01:09:38.865675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.713 [2024-05-15 01:09:38.865690] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.713 [2024-05-15 01:09:38.865702] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.713 [2024-05-15 01:09:38.865730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.713 qpair failed and we were unable to recover it. 00:22:26.713 [2024-05-15 01:09:38.875504] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.714 [2024-05-15 01:09:38.875666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.714 [2024-05-15 01:09:38.875692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.714 [2024-05-15 01:09:38.875707] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.714 [2024-05-15 01:09:38.875719] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.714 [2024-05-15 01:09:38.875747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.714 qpair failed and we were unable to recover it. 00:22:26.714 [2024-05-15 01:09:38.885524] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.714 [2024-05-15 01:09:38.885681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.714 [2024-05-15 01:09:38.885706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.714 [2024-05-15 01:09:38.885721] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.714 [2024-05-15 01:09:38.885733] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.714 [2024-05-15 01:09:38.885761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.714 qpair failed and we were unable to recover it. 
00:22:26.714 [2024-05-15 01:09:38.895581] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.714 [2024-05-15 01:09:38.895747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.714 [2024-05-15 01:09:38.895773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.714 [2024-05-15 01:09:38.895788] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.714 [2024-05-15 01:09:38.895800] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.714 [2024-05-15 01:09:38.895828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.714 qpair failed and we were unable to recover it. 00:22:26.714 [2024-05-15 01:09:38.905607] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.714 [2024-05-15 01:09:38.905773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.714 [2024-05-15 01:09:38.905799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.714 [2024-05-15 01:09:38.905819] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.714 [2024-05-15 01:09:38.905832] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.714 [2024-05-15 01:09:38.905860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.714 qpair failed and we were unable to recover it. 00:22:26.714 [2024-05-15 01:09:38.915651] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.714 [2024-05-15 01:09:38.915855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.714 [2024-05-15 01:09:38.915881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.714 [2024-05-15 01:09:38.915896] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.714 [2024-05-15 01:09:38.915909] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.714 [2024-05-15 01:09:38.915944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.714 qpair failed and we were unable to recover it. 
00:22:26.714 [2024-05-15 01:09:38.925660] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.714 [2024-05-15 01:09:38.925820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.714 [2024-05-15 01:09:38.925845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.714 [2024-05-15 01:09:38.925860] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.714 [2024-05-15 01:09:38.925873] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.714 [2024-05-15 01:09:38.925900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.714 qpair failed and we were unable to recover it. 00:22:26.714 [2024-05-15 01:09:38.935665] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.714 [2024-05-15 01:09:38.935831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.714 [2024-05-15 01:09:38.935857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.714 [2024-05-15 01:09:38.935872] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.714 [2024-05-15 01:09:38.935885] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.714 [2024-05-15 01:09:38.935912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.714 qpair failed and we were unable to recover it. 00:22:26.714 [2024-05-15 01:09:38.945685] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.714 [2024-05-15 01:09:38.945865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.714 [2024-05-15 01:09:38.945890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.714 [2024-05-15 01:09:38.945905] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.714 [2024-05-15 01:09:38.945938] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.714 [2024-05-15 01:09:38.945969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.714 qpair failed and we were unable to recover it. 
00:22:26.714 [2024-05-15 01:09:38.955729] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.714 [2024-05-15 01:09:38.955892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.714 [2024-05-15 01:09:38.955918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.714 [2024-05-15 01:09:38.955941] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.714 [2024-05-15 01:09:38.955955] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.714 [2024-05-15 01:09:38.955983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.714 qpair failed and we were unable to recover it. 00:22:26.714 [2024-05-15 01:09:38.965780] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.714 [2024-05-15 01:09:38.965958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.714 [2024-05-15 01:09:38.965984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.714 [2024-05-15 01:09:38.965998] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.714 [2024-05-15 01:09:38.966010] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.714 [2024-05-15 01:09:38.966038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.714 qpair failed and we were unable to recover it. 00:22:26.714 [2024-05-15 01:09:38.975865] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.714 [2024-05-15 01:09:38.976058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.714 [2024-05-15 01:09:38.976084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.714 [2024-05-15 01:09:38.976099] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.714 [2024-05-15 01:09:38.976111] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.714 [2024-05-15 01:09:38.976139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.714 qpair failed and we were unable to recover it. 
00:22:26.714 [2024-05-15 01:09:38.985820] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.714 [2024-05-15 01:09:38.985985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.714 [2024-05-15 01:09:38.986011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.714 [2024-05-15 01:09:38.986026] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.714 [2024-05-15 01:09:38.986038] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.714 [2024-05-15 01:09:38.986067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.714 qpair failed and we were unable to recover it. 00:22:26.714 [2024-05-15 01:09:38.995857] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.714 [2024-05-15 01:09:38.996058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.714 [2024-05-15 01:09:38.996084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.714 [2024-05-15 01:09:38.996104] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.714 [2024-05-15 01:09:38.996117] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.714 [2024-05-15 01:09:38.996145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.714 qpair failed and we were unable to recover it. 00:22:26.714 [2024-05-15 01:09:39.005881] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.714 [2024-05-15 01:09:39.006077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.714 [2024-05-15 01:09:39.006103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.714 [2024-05-15 01:09:39.006117] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.714 [2024-05-15 01:09:39.006129] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.714 [2024-05-15 01:09:39.006157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.714 qpair failed and we were unable to recover it. 
00:22:26.714 [2024-05-15 01:09:39.015924] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.715 [2024-05-15 01:09:39.016102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.715 [2024-05-15 01:09:39.016127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.715 [2024-05-15 01:09:39.016141] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.715 [2024-05-15 01:09:39.016154] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.715 [2024-05-15 01:09:39.016182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.715 qpair failed and we were unable to recover it. 00:22:26.715 [2024-05-15 01:09:39.025976] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.715 [2024-05-15 01:09:39.026144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.715 [2024-05-15 01:09:39.026169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.715 [2024-05-15 01:09:39.026184] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.715 [2024-05-15 01:09:39.026195] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.715 [2024-05-15 01:09:39.026223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.715 qpair failed and we were unable to recover it. 00:22:26.715 [2024-05-15 01:09:39.035961] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.715 [2024-05-15 01:09:39.036115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.715 [2024-05-15 01:09:39.036141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.715 [2024-05-15 01:09:39.036156] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.715 [2024-05-15 01:09:39.036168] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.715 [2024-05-15 01:09:39.036196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.715 qpair failed and we were unable to recover it. 
00:22:26.715 [2024-05-15 01:09:39.046027] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.715 [2024-05-15 01:09:39.046227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.715 [2024-05-15 01:09:39.046252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.715 [2024-05-15 01:09:39.046267] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.715 [2024-05-15 01:09:39.046279] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.715 [2024-05-15 01:09:39.046307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.715 qpair failed and we were unable to recover it. 00:22:26.715 [2024-05-15 01:09:39.056079] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.715 [2024-05-15 01:09:39.056320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.715 [2024-05-15 01:09:39.056345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.715 [2024-05-15 01:09:39.056360] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.715 [2024-05-15 01:09:39.056372] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.715 [2024-05-15 01:09:39.056399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.715 qpair failed and we were unable to recover it. 00:22:26.715 [2024-05-15 01:09:39.066066] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.715 [2024-05-15 01:09:39.066241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.715 [2024-05-15 01:09:39.066266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.715 [2024-05-15 01:09:39.066280] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.715 [2024-05-15 01:09:39.066293] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.715 [2024-05-15 01:09:39.066320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.715 qpair failed and we were unable to recover it. 
00:22:26.715 [2024-05-15 01:09:39.076081] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.715 [2024-05-15 01:09:39.076264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.715 [2024-05-15 01:09:39.076291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.715 [2024-05-15 01:09:39.076310] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.715 [2024-05-15 01:09:39.076322] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.715 [2024-05-15 01:09:39.076351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.715 qpair failed and we were unable to recover it. 00:22:26.715 [2024-05-15 01:09:39.086130] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.715 [2024-05-15 01:09:39.086301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.715 [2024-05-15 01:09:39.086328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.715 [2024-05-15 01:09:39.086350] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.715 [2024-05-15 01:09:39.086364] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.715 [2024-05-15 01:09:39.086392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.715 qpair failed and we were unable to recover it. 00:22:26.715 [2024-05-15 01:09:39.096144] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.715 [2024-05-15 01:09:39.096358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.715 [2024-05-15 01:09:39.096383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.715 [2024-05-15 01:09:39.096397] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.715 [2024-05-15 01:09:39.096409] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.715 [2024-05-15 01:09:39.096437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.715 qpair failed and we were unable to recover it. 
00:22:26.974 [2024-05-15 01:09:39.106158] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.974 [2024-05-15 01:09:39.106316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.974 [2024-05-15 01:09:39.106342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.974 [2024-05-15 01:09:39.106357] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.974 [2024-05-15 01:09:39.106369] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.974 [2024-05-15 01:09:39.106397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.974 qpair failed and we were unable to recover it. 00:22:26.974 [2024-05-15 01:09:39.116200] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.974 [2024-05-15 01:09:39.116353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.974 [2024-05-15 01:09:39.116379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.974 [2024-05-15 01:09:39.116394] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.974 [2024-05-15 01:09:39.116406] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.974 [2024-05-15 01:09:39.116434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.974 qpair failed and we were unable to recover it. 00:22:26.974 [2024-05-15 01:09:39.126238] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.974 [2024-05-15 01:09:39.126411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.974 [2024-05-15 01:09:39.126436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.974 [2024-05-15 01:09:39.126450] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.974 [2024-05-15 01:09:39.126463] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.974 [2024-05-15 01:09:39.126490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.974 qpair failed and we were unable to recover it. 
00:22:26.974 [2024-05-15 01:09:39.136284] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.974 [2024-05-15 01:09:39.136459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.974 [2024-05-15 01:09:39.136484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.974 [2024-05-15 01:09:39.136498] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.974 [2024-05-15 01:09:39.136510] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.974 [2024-05-15 01:09:39.136538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.974 qpair failed and we were unable to recover it. 00:22:26.974 [2024-05-15 01:09:39.146270] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.974 [2024-05-15 01:09:39.146431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.974 [2024-05-15 01:09:39.146456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.974 [2024-05-15 01:09:39.146471] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.975 [2024-05-15 01:09:39.146483] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.975 [2024-05-15 01:09:39.146510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.975 qpair failed and we were unable to recover it. 00:22:26.975 [2024-05-15 01:09:39.156361] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.975 [2024-05-15 01:09:39.156526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.975 [2024-05-15 01:09:39.156553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.975 [2024-05-15 01:09:39.156572] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.975 [2024-05-15 01:09:39.156585] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.975 [2024-05-15 01:09:39.156614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.975 qpair failed and we were unable to recover it. 
00:22:26.975 [2024-05-15 01:09:39.166376] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.975 [2024-05-15 01:09:39.166557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.975 [2024-05-15 01:09:39.166583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.975 [2024-05-15 01:09:39.166598] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.975 [2024-05-15 01:09:39.166610] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.975 [2024-05-15 01:09:39.166638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.975 qpair failed and we were unable to recover it. 00:22:26.975 [2024-05-15 01:09:39.176374] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.975 [2024-05-15 01:09:39.176548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.975 [2024-05-15 01:09:39.176579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.975 [2024-05-15 01:09:39.176595] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.975 [2024-05-15 01:09:39.176607] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.975 [2024-05-15 01:09:39.176635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.975 qpair failed and we were unable to recover it. 00:22:26.975 [2024-05-15 01:09:39.186428] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.975 [2024-05-15 01:09:39.186598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.975 [2024-05-15 01:09:39.186623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.975 [2024-05-15 01:09:39.186638] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.975 [2024-05-15 01:09:39.186650] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.975 [2024-05-15 01:09:39.186678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.975 qpair failed and we were unable to recover it. 
00:22:26.975 [2024-05-15 01:09:39.196456] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.975 [2024-05-15 01:09:39.196623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.975 [2024-05-15 01:09:39.196648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.975 [2024-05-15 01:09:39.196663] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.975 [2024-05-15 01:09:39.196675] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.975 [2024-05-15 01:09:39.196703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.975 qpair failed and we were unable to recover it. 00:22:26.975 [2024-05-15 01:09:39.206489] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.975 [2024-05-15 01:09:39.206648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.975 [2024-05-15 01:09:39.206672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.975 [2024-05-15 01:09:39.206687] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.975 [2024-05-15 01:09:39.206699] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.975 [2024-05-15 01:09:39.206727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.975 qpair failed and we were unable to recover it. 00:22:26.975 [2024-05-15 01:09:39.216481] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.975 [2024-05-15 01:09:39.216644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.975 [2024-05-15 01:09:39.216668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.975 [2024-05-15 01:09:39.216683] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.975 [2024-05-15 01:09:39.216696] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.975 [2024-05-15 01:09:39.216728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.975 qpair failed and we were unable to recover it. 
00:22:26.975 [2024-05-15 01:09:39.226495] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.975 [2024-05-15 01:09:39.226700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.975 [2024-05-15 01:09:39.226724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.975 [2024-05-15 01:09:39.226739] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.975 [2024-05-15 01:09:39.226751] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.975 [2024-05-15 01:09:39.226779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.975 qpair failed and we were unable to recover it. 00:22:26.975 [2024-05-15 01:09:39.236554] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.975 [2024-05-15 01:09:39.236753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.975 [2024-05-15 01:09:39.236778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.975 [2024-05-15 01:09:39.236793] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.975 [2024-05-15 01:09:39.236805] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.975 [2024-05-15 01:09:39.236832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.975 qpair failed and we were unable to recover it. 00:22:26.975 [2024-05-15 01:09:39.246545] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.976 [2024-05-15 01:09:39.246708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.976 [2024-05-15 01:09:39.246733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.976 [2024-05-15 01:09:39.246748] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.976 [2024-05-15 01:09:39.246760] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.976 [2024-05-15 01:09:39.246787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.976 qpair failed and we were unable to recover it. 
00:22:26.976 [2024-05-15 01:09:39.256616] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.976 [2024-05-15 01:09:39.256785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.976 [2024-05-15 01:09:39.256811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.976 [2024-05-15 01:09:39.256826] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.976 [2024-05-15 01:09:39.256838] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.976 [2024-05-15 01:09:39.256865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.976 qpair failed and we were unable to recover it. 00:22:26.976 [2024-05-15 01:09:39.266594] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.976 [2024-05-15 01:09:39.266763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.976 [2024-05-15 01:09:39.266793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.976 [2024-05-15 01:09:39.266808] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.976 [2024-05-15 01:09:39.266820] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.976 [2024-05-15 01:09:39.266848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.976 qpair failed and we were unable to recover it. 00:22:26.976 [2024-05-15 01:09:39.276663] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.976 [2024-05-15 01:09:39.276816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.976 [2024-05-15 01:09:39.276842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.976 [2024-05-15 01:09:39.276856] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.976 [2024-05-15 01:09:39.276869] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.976 [2024-05-15 01:09:39.276896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.976 qpair failed and we were unable to recover it. 
00:22:26.976 [2024-05-15 01:09:39.286681] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.976 [2024-05-15 01:09:39.286839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.976 [2024-05-15 01:09:39.286864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.976 [2024-05-15 01:09:39.286879] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.976 [2024-05-15 01:09:39.286892] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.976 [2024-05-15 01:09:39.286920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.976 qpair failed and we were unable to recover it. 00:22:26.976 [2024-05-15 01:09:39.296708] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.976 [2024-05-15 01:09:39.296891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.976 [2024-05-15 01:09:39.296916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.976 [2024-05-15 01:09:39.296937] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.976 [2024-05-15 01:09:39.296951] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.976 [2024-05-15 01:09:39.296979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.976 qpair failed and we were unable to recover it. 00:22:26.976 [2024-05-15 01:09:39.306728] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.976 [2024-05-15 01:09:39.306889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.976 [2024-05-15 01:09:39.306914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.976 [2024-05-15 01:09:39.306935] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.976 [2024-05-15 01:09:39.306949] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.976 [2024-05-15 01:09:39.306982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.976 qpair failed and we were unable to recover it. 
00:22:26.976 [2024-05-15 01:09:39.316773] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.976 [2024-05-15 01:09:39.316947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.976 [2024-05-15 01:09:39.316972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.976 [2024-05-15 01:09:39.316987] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.976 [2024-05-15 01:09:39.316999] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.976 [2024-05-15 01:09:39.317026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.976 qpair failed and we were unable to recover it. 00:22:26.976 [2024-05-15 01:09:39.326830] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.976 [2024-05-15 01:09:39.326993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.976 [2024-05-15 01:09:39.327018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.976 [2024-05-15 01:09:39.327033] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.976 [2024-05-15 01:09:39.327045] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.976 [2024-05-15 01:09:39.327074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.976 qpair failed and we were unable to recover it. 00:22:26.976 [2024-05-15 01:09:39.336852] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.976 [2024-05-15 01:09:39.337021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.976 [2024-05-15 01:09:39.337046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.976 [2024-05-15 01:09:39.337061] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.976 [2024-05-15 01:09:39.337073] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.976 [2024-05-15 01:09:39.337101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.976 qpair failed and we were unable to recover it. 
00:22:26.976 [2024-05-15 01:09:39.346843] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.976 [2024-05-15 01:09:39.347020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.976 [2024-05-15 01:09:39.347045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.976 [2024-05-15 01:09:39.347060] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.976 [2024-05-15 01:09:39.347072] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.976 [2024-05-15 01:09:39.347101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.976 qpair failed and we were unable to recover it. 00:22:26.976 [2024-05-15 01:09:39.356876] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:26.976 [2024-05-15 01:09:39.357090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:26.976 [2024-05-15 01:09:39.357120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:26.976 [2024-05-15 01:09:39.357136] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.976 [2024-05-15 01:09:39.357148] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:26.977 [2024-05-15 01:09:39.357176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:26.977 qpair failed and we were unable to recover it. 00:22:27.236 [2024-05-15 01:09:39.366990] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.236 [2024-05-15 01:09:39.367227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.236 [2024-05-15 01:09:39.367252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.236 [2024-05-15 01:09:39.367267] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.236 [2024-05-15 01:09:39.367279] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.236 [2024-05-15 01:09:39.367307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.236 qpair failed and we were unable to recover it. 
00:22:27.236 [2024-05-15 01:09:39.376958] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.236 [2024-05-15 01:09:39.377146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.236 [2024-05-15 01:09:39.377171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.236 [2024-05-15 01:09:39.377186] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.236 [2024-05-15 01:09:39.377198] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.236 [2024-05-15 01:09:39.377226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.236 qpair failed and we were unable to recover it. 00:22:27.236 [2024-05-15 01:09:39.387028] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.236 [2024-05-15 01:09:39.387226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.236 [2024-05-15 01:09:39.387253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.236 [2024-05-15 01:09:39.387268] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.236 [2024-05-15 01:09:39.387280] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.236 [2024-05-15 01:09:39.387309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.236 qpair failed and we were unable to recover it. 00:22:27.236 [2024-05-15 01:09:39.397032] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.236 [2024-05-15 01:09:39.397211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.236 [2024-05-15 01:09:39.397238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.236 [2024-05-15 01:09:39.397253] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.236 [2024-05-15 01:09:39.397265] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.236 [2024-05-15 01:09:39.397299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.236 qpair failed and we were unable to recover it. 
00:22:27.236 [2024-05-15 01:09:39.407056] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.236 [2024-05-15 01:09:39.407215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.236 [2024-05-15 01:09:39.407240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.236 [2024-05-15 01:09:39.407255] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.236 [2024-05-15 01:09:39.407267] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.236 [2024-05-15 01:09:39.407296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.236 qpair failed and we were unable to recover it. 00:22:27.236 [2024-05-15 01:09:39.417084] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.236 [2024-05-15 01:09:39.417253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.236 [2024-05-15 01:09:39.417279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.236 [2024-05-15 01:09:39.417294] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.236 [2024-05-15 01:09:39.417307] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.236 [2024-05-15 01:09:39.417335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.236 qpair failed and we were unable to recover it. 00:22:27.236 [2024-05-15 01:09:39.427075] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.236 [2024-05-15 01:09:39.427233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.236 [2024-05-15 01:09:39.427257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.236 [2024-05-15 01:09:39.427271] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.236 [2024-05-15 01:09:39.427283] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.236 [2024-05-15 01:09:39.427312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.236 qpair failed and we were unable to recover it. 
00:22:27.236 [2024-05-15 01:09:39.437105] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.236 [2024-05-15 01:09:39.437261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.236 [2024-05-15 01:09:39.437285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.236 [2024-05-15 01:09:39.437300] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.236 [2024-05-15 01:09:39.437312] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.236 [2024-05-15 01:09:39.437339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.236 qpair failed and we were unable to recover it. 00:22:27.236 [2024-05-15 01:09:39.447183] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.236 [2024-05-15 01:09:39.447342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.236 [2024-05-15 01:09:39.447373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.236 [2024-05-15 01:09:39.447388] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.236 [2024-05-15 01:09:39.447400] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.236 [2024-05-15 01:09:39.447429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.236 qpair failed and we were unable to recover it. 00:22:27.236 [2024-05-15 01:09:39.457215] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.236 [2024-05-15 01:09:39.457409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.236 [2024-05-15 01:09:39.457434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.236 [2024-05-15 01:09:39.457449] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.236 [2024-05-15 01:09:39.457461] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.236 [2024-05-15 01:09:39.457488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.236 qpair failed and we were unable to recover it. 
00:22:27.236 [2024-05-15 01:09:39.467243] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.236 [2024-05-15 01:09:39.467404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.236 [2024-05-15 01:09:39.467428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.237 [2024-05-15 01:09:39.467442] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.237 [2024-05-15 01:09:39.467454] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.237 [2024-05-15 01:09:39.467482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.237 qpair failed and we were unable to recover it. 00:22:27.237 [2024-05-15 01:09:39.477231] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.237 [2024-05-15 01:09:39.477399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.237 [2024-05-15 01:09:39.477423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.237 [2024-05-15 01:09:39.477438] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.237 [2024-05-15 01:09:39.477451] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.237 [2024-05-15 01:09:39.477478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.237 qpair failed and we were unable to recover it. 00:22:27.237 [2024-05-15 01:09:39.487282] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.237 [2024-05-15 01:09:39.487489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.237 [2024-05-15 01:09:39.487514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.237 [2024-05-15 01:09:39.487529] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.237 [2024-05-15 01:09:39.487547] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.237 [2024-05-15 01:09:39.487575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.237 qpair failed and we were unable to recover it. 
00:22:27.237 [2024-05-15 01:09:39.497313] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.237 [2024-05-15 01:09:39.497480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.237 [2024-05-15 01:09:39.497505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.237 [2024-05-15 01:09:39.497520] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.237 [2024-05-15 01:09:39.497533] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.237 [2024-05-15 01:09:39.497560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.237 qpair failed and we were unable to recover it. 00:22:27.237 [2024-05-15 01:09:39.507301] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.237 [2024-05-15 01:09:39.507460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.237 [2024-05-15 01:09:39.507485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.237 [2024-05-15 01:09:39.507500] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.237 [2024-05-15 01:09:39.507512] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.237 [2024-05-15 01:09:39.507540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.237 qpair failed and we were unable to recover it. 00:22:27.237 [2024-05-15 01:09:39.517329] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.237 [2024-05-15 01:09:39.517490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.237 [2024-05-15 01:09:39.517515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.237 [2024-05-15 01:09:39.517530] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.237 [2024-05-15 01:09:39.517542] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.237 [2024-05-15 01:09:39.517569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.237 qpair failed and we were unable to recover it. 
00:22:27.237 [2024-05-15 01:09:39.527443] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.237 [2024-05-15 01:09:39.527627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.237 [2024-05-15 01:09:39.527652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.237 [2024-05-15 01:09:39.527667] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.237 [2024-05-15 01:09:39.527679] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.237 [2024-05-15 01:09:39.527706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.237 qpair failed and we were unable to recover it. 00:22:27.237 [2024-05-15 01:09:39.537404] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.237 [2024-05-15 01:09:39.537570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.237 [2024-05-15 01:09:39.537595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.237 [2024-05-15 01:09:39.537609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.237 [2024-05-15 01:09:39.537621] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.237 [2024-05-15 01:09:39.537648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.237 qpair failed and we were unable to recover it. 00:22:27.237 [2024-05-15 01:09:39.547444] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.237 [2024-05-15 01:09:39.547608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.237 [2024-05-15 01:09:39.547633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.237 [2024-05-15 01:09:39.547647] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.237 [2024-05-15 01:09:39.547660] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.237 [2024-05-15 01:09:39.547687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.237 qpair failed and we were unable to recover it. 
00:22:27.237 [2024-05-15 01:09:39.557445] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.237 [2024-05-15 01:09:39.557608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.237 [2024-05-15 01:09:39.557633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.237 [2024-05-15 01:09:39.557647] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.237 [2024-05-15 01:09:39.557659] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.237 [2024-05-15 01:09:39.557687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.237 qpair failed and we were unable to recover it. 00:22:27.237 [2024-05-15 01:09:39.567543] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.237 [2024-05-15 01:09:39.567740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.237 [2024-05-15 01:09:39.567765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.237 [2024-05-15 01:09:39.567780] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.237 [2024-05-15 01:09:39.567792] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.237 [2024-05-15 01:09:39.567819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.237 qpair failed and we were unable to recover it. 00:22:27.237 [2024-05-15 01:09:39.577559] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.237 [2024-05-15 01:09:39.577770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.237 [2024-05-15 01:09:39.577795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.237 [2024-05-15 01:09:39.577810] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.237 [2024-05-15 01:09:39.577828] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.237 [2024-05-15 01:09:39.577856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.237 qpair failed and we were unable to recover it. 
00:22:27.237 [2024-05-15 01:09:39.587588] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.237 [2024-05-15 01:09:39.587756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.237 [2024-05-15 01:09:39.587782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.237 [2024-05-15 01:09:39.587796] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.237 [2024-05-15 01:09:39.587808] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.237 [2024-05-15 01:09:39.587836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.237 qpair failed and we were unable to recover it. 00:22:27.237 [2024-05-15 01:09:39.597595] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.237 [2024-05-15 01:09:39.597751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.237 [2024-05-15 01:09:39.597777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.237 [2024-05-15 01:09:39.597792] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.238 [2024-05-15 01:09:39.597804] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.238 [2024-05-15 01:09:39.597831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.238 qpair failed and we were unable to recover it. 00:22:27.238 [2024-05-15 01:09:39.607599] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.238 [2024-05-15 01:09:39.607793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.238 [2024-05-15 01:09:39.607819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.238 [2024-05-15 01:09:39.607834] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.238 [2024-05-15 01:09:39.607847] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.238 [2024-05-15 01:09:39.607875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.238 qpair failed and we were unable to recover it. 
00:22:27.238 [2024-05-15 01:09:39.617635] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.238 [2024-05-15 01:09:39.617847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.238 [2024-05-15 01:09:39.617872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.238 [2024-05-15 01:09:39.617886] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.238 [2024-05-15 01:09:39.617898] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.238 [2024-05-15 01:09:39.617926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.238 qpair failed and we were unable to recover it. 00:22:27.238 [2024-05-15 01:09:39.627668] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.238 [2024-05-15 01:09:39.627852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.238 [2024-05-15 01:09:39.627879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.238 [2024-05-15 01:09:39.627894] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.238 [2024-05-15 01:09:39.627906] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.238 [2024-05-15 01:09:39.627940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.238 qpair failed and we were unable to recover it. 00:22:27.497 [2024-05-15 01:09:39.637704] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.497 [2024-05-15 01:09:39.637898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.497 [2024-05-15 01:09:39.637924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.497 [2024-05-15 01:09:39.637947] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.497 [2024-05-15 01:09:39.637960] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.497 [2024-05-15 01:09:39.637989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.497 qpair failed and we were unable to recover it. 
00:22:27.497 [2024-05-15 01:09:39.647753] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.497 [2024-05-15 01:09:39.647951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.497 [2024-05-15 01:09:39.647977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.497 [2024-05-15 01:09:39.647991] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.497 [2024-05-15 01:09:39.648004] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.497 [2024-05-15 01:09:39.648032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.497 qpair failed and we were unable to recover it. 00:22:27.497 [2024-05-15 01:09:39.657806] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.497 [2024-05-15 01:09:39.658001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.497 [2024-05-15 01:09:39.658026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.497 [2024-05-15 01:09:39.658041] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.497 [2024-05-15 01:09:39.658053] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.497 [2024-05-15 01:09:39.658082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.497 qpair failed and we were unable to recover it. 00:22:27.497 [2024-05-15 01:09:39.667807] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.497 [2024-05-15 01:09:39.667981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.497 [2024-05-15 01:09:39.668007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.497 [2024-05-15 01:09:39.668027] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.497 [2024-05-15 01:09:39.668040] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.497 [2024-05-15 01:09:39.668069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.497 qpair failed and we were unable to recover it. 
00:22:27.497 [2024-05-15 01:09:39.677836] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.497 [2024-05-15 01:09:39.678024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.497 [2024-05-15 01:09:39.678051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.497 [2024-05-15 01:09:39.678066] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.497 [2024-05-15 01:09:39.678078] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.497 [2024-05-15 01:09:39.678106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.497 qpair failed and we were unable to recover it. 00:22:27.497 [2024-05-15 01:09:39.687960] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.497 [2024-05-15 01:09:39.688163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.497 [2024-05-15 01:09:39.688188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.497 [2024-05-15 01:09:39.688203] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.497 [2024-05-15 01:09:39.688216] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.497 [2024-05-15 01:09:39.688244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.497 qpair failed and we were unable to recover it. 00:22:27.497 [2024-05-15 01:09:39.697976] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.497 [2024-05-15 01:09:39.698177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.497 [2024-05-15 01:09:39.698202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.497 [2024-05-15 01:09:39.698216] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.497 [2024-05-15 01:09:39.698228] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.497 [2024-05-15 01:09:39.698258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.497 qpair failed and we were unable to recover it. 
00:22:27.497 [2024-05-15 01:09:39.707969] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.497 [2024-05-15 01:09:39.708131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.497 [2024-05-15 01:09:39.708157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.497 [2024-05-15 01:09:39.708172] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.497 [2024-05-15 01:09:39.708184] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.497 [2024-05-15 01:09:39.708211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.497 qpair failed and we were unable to recover it. 00:22:27.497 [2024-05-15 01:09:39.717982] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.497 [2024-05-15 01:09:39.718145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.497 [2024-05-15 01:09:39.718170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.497 [2024-05-15 01:09:39.718185] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.497 [2024-05-15 01:09:39.718197] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.497 [2024-05-15 01:09:39.718225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.497 qpair failed and we were unable to recover it. 00:22:27.497 [2024-05-15 01:09:39.727953] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.497 [2024-05-15 01:09:39.728112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.497 [2024-05-15 01:09:39.728137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.497 [2024-05-15 01:09:39.728151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.498 [2024-05-15 01:09:39.728164] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.498 [2024-05-15 01:09:39.728191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.498 qpair failed and we were unable to recover it. 
00:22:27.498 [2024-05-15 01:09:39.738015] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.498 [2024-05-15 01:09:39.738205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.498 [2024-05-15 01:09:39.738230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.498 [2024-05-15 01:09:39.738244] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.498 [2024-05-15 01:09:39.738256] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.498 [2024-05-15 01:09:39.738284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.498 qpair failed and we were unable to recover it. 00:22:27.498 [2024-05-15 01:09:39.748053] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.498 [2024-05-15 01:09:39.748216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.498 [2024-05-15 01:09:39.748242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.498 [2024-05-15 01:09:39.748256] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.498 [2024-05-15 01:09:39.748269] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.498 [2024-05-15 01:09:39.748296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.498 qpair failed and we were unable to recover it. 00:22:27.498 [2024-05-15 01:09:39.758099] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.498 [2024-05-15 01:09:39.758294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.498 [2024-05-15 01:09:39.758319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.498 [2024-05-15 01:09:39.758339] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.498 [2024-05-15 01:09:39.758352] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.498 [2024-05-15 01:09:39.758380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.498 qpair failed and we were unable to recover it. 
00:22:27.498 [2024-05-15 01:09:39.768095] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.498 [2024-05-15 01:09:39.768300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.498 [2024-05-15 01:09:39.768328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.498 [2024-05-15 01:09:39.768344] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.498 [2024-05-15 01:09:39.768356] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.498 [2024-05-15 01:09:39.768384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.498 qpair failed and we were unable to recover it. 00:22:27.498 [2024-05-15 01:09:39.778104] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.498 [2024-05-15 01:09:39.778292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.498 [2024-05-15 01:09:39.778317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.498 [2024-05-15 01:09:39.778332] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.498 [2024-05-15 01:09:39.778345] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.498 [2024-05-15 01:09:39.778373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.498 qpair failed and we were unable to recover it. 00:22:27.498 [2024-05-15 01:09:39.788138] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.498 [2024-05-15 01:09:39.788299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.498 [2024-05-15 01:09:39.788325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.498 [2024-05-15 01:09:39.788339] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.498 [2024-05-15 01:09:39.788352] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.498 [2024-05-15 01:09:39.788379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.498 qpair failed and we were unable to recover it. 
00:22:27.498 [2024-05-15 01:09:39.798182] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.498 [2024-05-15 01:09:39.798342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.498 [2024-05-15 01:09:39.798368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.498 [2024-05-15 01:09:39.798383] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.498 [2024-05-15 01:09:39.798395] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.498 [2024-05-15 01:09:39.798422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.498 qpair failed and we were unable to recover it. 00:22:27.498 [2024-05-15 01:09:39.808255] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.498 [2024-05-15 01:09:39.808427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.498 [2024-05-15 01:09:39.808452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.498 [2024-05-15 01:09:39.808467] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.498 [2024-05-15 01:09:39.808480] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.498 [2024-05-15 01:09:39.808508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.498 qpair failed and we were unable to recover it. 00:22:27.498 [2024-05-15 01:09:39.818216] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.498 [2024-05-15 01:09:39.818377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.498 [2024-05-15 01:09:39.818402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.498 [2024-05-15 01:09:39.818416] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.498 [2024-05-15 01:09:39.818428] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.498 [2024-05-15 01:09:39.818456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.498 qpair failed and we were unable to recover it. 
00:22:27.498 [2024-05-15 01:09:39.828277] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.498 [2024-05-15 01:09:39.828440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.498 [2024-05-15 01:09:39.828464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.498 [2024-05-15 01:09:39.828479] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.498 [2024-05-15 01:09:39.828491] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.498 [2024-05-15 01:09:39.828519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.498 qpair failed and we were unable to recover it. 00:22:27.498 [2024-05-15 01:09:39.838310] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.498 [2024-05-15 01:09:39.838510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.498 [2024-05-15 01:09:39.838535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.498 [2024-05-15 01:09:39.838549] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.498 [2024-05-15 01:09:39.838561] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.498 [2024-05-15 01:09:39.838589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.498 qpair failed and we were unable to recover it. 00:22:27.498 [2024-05-15 01:09:39.848298] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.498 [2024-05-15 01:09:39.848464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.498 [2024-05-15 01:09:39.848490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.498 [2024-05-15 01:09:39.848512] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.498 [2024-05-15 01:09:39.848525] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.498 [2024-05-15 01:09:39.848553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.498 qpair failed and we were unable to recover it. 
00:22:27.498 [2024-05-15 01:09:39.858329] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.498 [2024-05-15 01:09:39.858543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.498 [2024-05-15 01:09:39.858568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.498 [2024-05-15 01:09:39.858583] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.498 [2024-05-15 01:09:39.858595] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.498 [2024-05-15 01:09:39.858623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.498 qpair failed and we were unable to recover it. 00:22:27.498 [2024-05-15 01:09:39.868398] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.498 [2024-05-15 01:09:39.868562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.498 [2024-05-15 01:09:39.868586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.499 [2024-05-15 01:09:39.868600] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.499 [2024-05-15 01:09:39.868613] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.499 [2024-05-15 01:09:39.868640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.499 qpair failed and we were unable to recover it. 00:22:27.499 [2024-05-15 01:09:39.878426] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.499 [2024-05-15 01:09:39.878605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.499 [2024-05-15 01:09:39.878630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.499 [2024-05-15 01:09:39.878645] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.499 [2024-05-15 01:09:39.878657] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.499 [2024-05-15 01:09:39.878685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.499 qpair failed and we were unable to recover it. 
00:22:27.499 [2024-05-15 01:09:39.888469] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.499 [2024-05-15 01:09:39.888632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.499 [2024-05-15 01:09:39.888658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.499 [2024-05-15 01:09:39.888673] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.499 [2024-05-15 01:09:39.888685] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.499 [2024-05-15 01:09:39.888713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.499 qpair failed and we were unable to recover it. 00:22:27.758 [2024-05-15 01:09:39.898461] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.758 [2024-05-15 01:09:39.898628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.758 [2024-05-15 01:09:39.898653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.758 [2024-05-15 01:09:39.898669] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.758 [2024-05-15 01:09:39.898681] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.758 [2024-05-15 01:09:39.898708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.758 qpair failed and we were unable to recover it. 00:22:27.758 [2024-05-15 01:09:39.908550] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.759 [2024-05-15 01:09:39.908710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.759 [2024-05-15 01:09:39.908736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.759 [2024-05-15 01:09:39.908750] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.759 [2024-05-15 01:09:39.908762] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.759 [2024-05-15 01:09:39.908791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.759 qpair failed and we were unable to recover it. 
00:22:27.759 [2024-05-15 01:09:39.918538] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.759 [2024-05-15 01:09:39.918701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.759 [2024-05-15 01:09:39.918736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.759 [2024-05-15 01:09:39.918751] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.759 [2024-05-15 01:09:39.918763] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.759 [2024-05-15 01:09:39.918791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.759 qpair failed and we were unable to recover it. 00:22:27.759 [2024-05-15 01:09:39.928561] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.759 [2024-05-15 01:09:39.928720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.759 [2024-05-15 01:09:39.928746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.759 [2024-05-15 01:09:39.928760] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.759 [2024-05-15 01:09:39.928775] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.759 [2024-05-15 01:09:39.928804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.759 qpair failed and we were unable to recover it. 00:22:27.759 [2024-05-15 01:09:39.938566] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.759 [2024-05-15 01:09:39.938728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.759 [2024-05-15 01:09:39.938759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.759 [2024-05-15 01:09:39.938774] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.759 [2024-05-15 01:09:39.938786] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.759 [2024-05-15 01:09:39.938814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.759 qpair failed and we were unable to recover it. 
00:22:27.759 [2024-05-15 01:09:39.948631] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.759 [2024-05-15 01:09:39.948788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.759 [2024-05-15 01:09:39.948814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.759 [2024-05-15 01:09:39.948828] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.759 [2024-05-15 01:09:39.948840] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.759 [2024-05-15 01:09:39.948869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.759 qpair failed and we were unable to recover it. 00:22:27.759 [2024-05-15 01:09:39.958610] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.759 [2024-05-15 01:09:39.958781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.759 [2024-05-15 01:09:39.958806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.759 [2024-05-15 01:09:39.958821] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.759 [2024-05-15 01:09:39.958833] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.759 [2024-05-15 01:09:39.958862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.759 qpair failed and we were unable to recover it. 00:22:27.759 [2024-05-15 01:09:39.968661] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.759 [2024-05-15 01:09:39.968857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.759 [2024-05-15 01:09:39.968884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.759 [2024-05-15 01:09:39.968899] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.759 [2024-05-15 01:09:39.968912] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.759 [2024-05-15 01:09:39.968949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.759 qpair failed and we were unable to recover it. 
00:22:27.759 [2024-05-15 01:09:39.978707] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.759 [2024-05-15 01:09:39.978878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.759 [2024-05-15 01:09:39.978904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.759 [2024-05-15 01:09:39.978923] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.759 [2024-05-15 01:09:39.978966] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.759 [2024-05-15 01:09:39.978998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.759 qpair failed and we were unable to recover it. 00:22:27.759 [2024-05-15 01:09:39.988741] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.759 [2024-05-15 01:09:39.988910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.759 [2024-05-15 01:09:39.988946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.759 [2024-05-15 01:09:39.988963] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.759 [2024-05-15 01:09:39.988975] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.759 [2024-05-15 01:09:39.989004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.759 qpair failed and we were unable to recover it. 00:22:27.759 [2024-05-15 01:09:39.998749] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.759 [2024-05-15 01:09:39.998911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.759 [2024-05-15 01:09:39.998943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.759 [2024-05-15 01:09:39.998959] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.759 [2024-05-15 01:09:39.998972] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.759 [2024-05-15 01:09:39.999001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.759 qpair failed and we were unable to recover it. 
00:22:27.759 [2024-05-15 01:09:40.008805] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.759 [2024-05-15 01:09:40.008991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.759 [2024-05-15 01:09:40.009020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.759 [2024-05-15 01:09:40.009036] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.759 [2024-05-15 01:09:40.009049] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.759 [2024-05-15 01:09:40.009080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.759 qpair failed and we were unable to recover it. 00:22:27.759 [2024-05-15 01:09:40.018815] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.759 [2024-05-15 01:09:40.018989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.759 [2024-05-15 01:09:40.019015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.759 [2024-05-15 01:09:40.019031] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.759 [2024-05-15 01:09:40.019043] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.759 [2024-05-15 01:09:40.019072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.759 qpair failed and we were unable to recover it. 00:22:27.759 [2024-05-15 01:09:40.028867] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.759 [2024-05-15 01:09:40.029073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.759 [2024-05-15 01:09:40.029105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.759 [2024-05-15 01:09:40.029121] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.760 [2024-05-15 01:09:40.029134] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.760 [2024-05-15 01:09:40.029162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.760 qpair failed and we were unable to recover it. 
00:22:27.760 [2024-05-15 01:09:40.038890] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.760 [2024-05-15 01:09:40.039120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.760 [2024-05-15 01:09:40.039148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.760 [2024-05-15 01:09:40.039164] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.760 [2024-05-15 01:09:40.039176] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.760 [2024-05-15 01:09:40.039211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.760 qpair failed and we were unable to recover it. 00:22:27.760 [2024-05-15 01:09:40.048922] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.760 [2024-05-15 01:09:40.049134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.760 [2024-05-15 01:09:40.049161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.760 [2024-05-15 01:09:40.049176] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.760 [2024-05-15 01:09:40.049188] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.760 [2024-05-15 01:09:40.049218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.760 qpair failed and we were unable to recover it. 00:22:27.760 [2024-05-15 01:09:40.058928] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.760 [2024-05-15 01:09:40.059142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.760 [2024-05-15 01:09:40.059168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.760 [2024-05-15 01:09:40.059182] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.760 [2024-05-15 01:09:40.059205] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.760 [2024-05-15 01:09:40.059233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.760 qpair failed and we were unable to recover it. 
00:22:27.760 [2024-05-15 01:09:40.068947] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.760 [2024-05-15 01:09:40.069159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.760 [2024-05-15 01:09:40.069192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.760 [2024-05-15 01:09:40.069207] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.760 [2024-05-15 01:09:40.069220] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.760 [2024-05-15 01:09:40.069256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.760 qpair failed and we were unable to recover it. 00:22:27.760 [2024-05-15 01:09:40.078971] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.760 [2024-05-15 01:09:40.079133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.760 [2024-05-15 01:09:40.079159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.760 [2024-05-15 01:09:40.079174] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.760 [2024-05-15 01:09:40.079186] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.760 [2024-05-15 01:09:40.079215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.760 qpair failed and we were unable to recover it. 00:22:27.760 [2024-05-15 01:09:40.089009] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.760 [2024-05-15 01:09:40.089193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.760 [2024-05-15 01:09:40.089219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.760 [2024-05-15 01:09:40.089234] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.760 [2024-05-15 01:09:40.089247] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.760 [2024-05-15 01:09:40.089276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.760 qpair failed and we were unable to recover it. 
00:22:27.760 [2024-05-15 01:09:40.099033] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.760 [2024-05-15 01:09:40.099237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.760 [2024-05-15 01:09:40.099262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.760 [2024-05-15 01:09:40.099278] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.760 [2024-05-15 01:09:40.099291] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.760 [2024-05-15 01:09:40.099319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.760 qpair failed and we were unable to recover it. 00:22:27.760 [2024-05-15 01:09:40.109086] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.760 [2024-05-15 01:09:40.109250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.760 [2024-05-15 01:09:40.109276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.760 [2024-05-15 01:09:40.109291] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.760 [2024-05-15 01:09:40.109303] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.760 [2024-05-15 01:09:40.109332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.760 qpair failed and we were unable to recover it. 00:22:27.760 [2024-05-15 01:09:40.119087] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.760 [2024-05-15 01:09:40.119253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.760 [2024-05-15 01:09:40.119284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.760 [2024-05-15 01:09:40.119299] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.760 [2024-05-15 01:09:40.119312] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.760 [2024-05-15 01:09:40.119340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.760 qpair failed and we were unable to recover it. 
00:22:27.760 [2024-05-15 01:09:40.129100] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.760 [2024-05-15 01:09:40.129261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.760 [2024-05-15 01:09:40.129286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.760 [2024-05-15 01:09:40.129301] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.760 [2024-05-15 01:09:40.129313] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.760 [2024-05-15 01:09:40.129341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.760 qpair failed and we were unable to recover it. 00:22:27.760 [2024-05-15 01:09:40.139219] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.760 [2024-05-15 01:09:40.139425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.760 [2024-05-15 01:09:40.139452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.760 [2024-05-15 01:09:40.139467] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.760 [2024-05-15 01:09:40.139488] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.760 [2024-05-15 01:09:40.139520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.760 qpair failed and we were unable to recover it. 00:22:27.760 [2024-05-15 01:09:40.149165] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:27.760 [2024-05-15 01:09:40.149329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:27.760 [2024-05-15 01:09:40.149355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:27.760 [2024-05-15 01:09:40.149369] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:27.761 [2024-05-15 01:09:40.149382] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:27.761 [2024-05-15 01:09:40.149411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:27.761 qpair failed and we were unable to recover it. 
00:22:28.020 [2024-05-15 01:09:40.159203] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.020 [2024-05-15 01:09:40.159359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.020 [2024-05-15 01:09:40.159385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.020 [2024-05-15 01:09:40.159399] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.020 [2024-05-15 01:09:40.159412] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.020 [2024-05-15 01:09:40.159445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.020 qpair failed and we were unable to recover it. 00:22:28.020 [2024-05-15 01:09:40.169254] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.020 [2024-05-15 01:09:40.169426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.020 [2024-05-15 01:09:40.169453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.020 [2024-05-15 01:09:40.169468] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.020 [2024-05-15 01:09:40.169480] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.020 [2024-05-15 01:09:40.169508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.020 qpair failed and we were unable to recover it. 00:22:28.020 [2024-05-15 01:09:40.179327] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.020 [2024-05-15 01:09:40.179496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.020 [2024-05-15 01:09:40.179523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.020 [2024-05-15 01:09:40.179537] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.020 [2024-05-15 01:09:40.179549] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.020 [2024-05-15 01:09:40.179577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.020 qpair failed and we were unable to recover it. 
00:22:28.020 [2024-05-15 01:09:40.189283] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.020 [2024-05-15 01:09:40.189447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.020 [2024-05-15 01:09:40.189473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.020 [2024-05-15 01:09:40.189487] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.020 [2024-05-15 01:09:40.189500] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.020 [2024-05-15 01:09:40.189528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.020 qpair failed and we were unable to recover it. 00:22:28.020 [2024-05-15 01:09:40.199312] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.020 [2024-05-15 01:09:40.199470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.020 [2024-05-15 01:09:40.199496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.020 [2024-05-15 01:09:40.199511] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.020 [2024-05-15 01:09:40.199524] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.020 [2024-05-15 01:09:40.199552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.020 qpair failed and we were unable to recover it. 00:22:28.020 [2024-05-15 01:09:40.209319] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.020 [2024-05-15 01:09:40.209480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.020 [2024-05-15 01:09:40.209511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.020 [2024-05-15 01:09:40.209527] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.020 [2024-05-15 01:09:40.209539] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.020 [2024-05-15 01:09:40.209567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.020 qpair failed and we were unable to recover it. 
00:22:28.020 [2024-05-15 01:09:40.219395] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.020 [2024-05-15 01:09:40.219586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.020 [2024-05-15 01:09:40.219610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.020 [2024-05-15 01:09:40.219625] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.020 [2024-05-15 01:09:40.219638] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.020 [2024-05-15 01:09:40.219666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.020 qpair failed and we were unable to recover it. 00:22:28.020 [2024-05-15 01:09:40.229410] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.020 [2024-05-15 01:09:40.229574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.020 [2024-05-15 01:09:40.229600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.020 [2024-05-15 01:09:40.229614] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.020 [2024-05-15 01:09:40.229627] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.020 [2024-05-15 01:09:40.229654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.020 qpair failed and we were unable to recover it. 00:22:28.020 [2024-05-15 01:09:40.239424] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.021 [2024-05-15 01:09:40.239582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.021 [2024-05-15 01:09:40.239607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.021 [2024-05-15 01:09:40.239622] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.021 [2024-05-15 01:09:40.239634] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.021 [2024-05-15 01:09:40.239662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.021 qpair failed and we were unable to recover it. 
00:22:28.021 [2024-05-15 01:09:40.249451] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.021 [2024-05-15 01:09:40.249604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.021 [2024-05-15 01:09:40.249629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.021 [2024-05-15 01:09:40.249644] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.021 [2024-05-15 01:09:40.249662] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.021 [2024-05-15 01:09:40.249691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.021 qpair failed and we were unable to recover it. 00:22:28.021 [2024-05-15 01:09:40.259522] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.021 [2024-05-15 01:09:40.259729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.021 [2024-05-15 01:09:40.259755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.021 [2024-05-15 01:09:40.259774] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.021 [2024-05-15 01:09:40.259787] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.021 [2024-05-15 01:09:40.259816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.021 qpair failed and we were unable to recover it. 00:22:28.021 [2024-05-15 01:09:40.269528] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.021 [2024-05-15 01:09:40.269735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.021 [2024-05-15 01:09:40.269761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.021 [2024-05-15 01:09:40.269776] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.021 [2024-05-15 01:09:40.269788] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.021 [2024-05-15 01:09:40.269816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.021 qpair failed and we were unable to recover it. 
00:22:28.021 [2024-05-15 01:09:40.279541] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.021 [2024-05-15 01:09:40.279706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.021 [2024-05-15 01:09:40.279731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.021 [2024-05-15 01:09:40.279746] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.021 [2024-05-15 01:09:40.279758] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.021 [2024-05-15 01:09:40.279786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.021 qpair failed and we were unable to recover it. 00:22:28.021 [2024-05-15 01:09:40.289611] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.021 [2024-05-15 01:09:40.289818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.021 [2024-05-15 01:09:40.289845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.021 [2024-05-15 01:09:40.289860] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.021 [2024-05-15 01:09:40.289872] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.021 [2024-05-15 01:09:40.289900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.021 qpair failed and we were unable to recover it. 00:22:28.021 [2024-05-15 01:09:40.299621] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.021 [2024-05-15 01:09:40.299794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.021 [2024-05-15 01:09:40.299820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.021 [2024-05-15 01:09:40.299834] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.021 [2024-05-15 01:09:40.299846] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.021 [2024-05-15 01:09:40.299874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.021 qpair failed and we were unable to recover it. 
00:22:28.021 [2024-05-15 01:09:40.309638] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.021 [2024-05-15 01:09:40.309819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.021 [2024-05-15 01:09:40.309845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.021 [2024-05-15 01:09:40.309859] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.021 [2024-05-15 01:09:40.309872] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.021 [2024-05-15 01:09:40.309899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.021 qpair failed and we were unable to recover it. 00:22:28.021 [2024-05-15 01:09:40.319689] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.021 [2024-05-15 01:09:40.319849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.021 [2024-05-15 01:09:40.319873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.021 [2024-05-15 01:09:40.319889] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.021 [2024-05-15 01:09:40.319901] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.021 [2024-05-15 01:09:40.319937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.021 qpair failed and we were unable to recover it. 00:22:28.021 [2024-05-15 01:09:40.329723] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.021 [2024-05-15 01:09:40.329888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.021 [2024-05-15 01:09:40.329913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.021 [2024-05-15 01:09:40.329928] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.021 [2024-05-15 01:09:40.329947] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.021 [2024-05-15 01:09:40.329975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.021 qpair failed and we were unable to recover it. 
00:22:28.021 [2024-05-15 01:09:40.339744] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.021 [2024-05-15 01:09:40.339953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.021 [2024-05-15 01:09:40.339977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.021 [2024-05-15 01:09:40.339992] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.021 [2024-05-15 01:09:40.340010] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.021 [2024-05-15 01:09:40.340038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.021 qpair failed and we were unable to recover it. 00:22:28.021 [2024-05-15 01:09:40.349754] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.021 [2024-05-15 01:09:40.349917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.021 [2024-05-15 01:09:40.349948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.021 [2024-05-15 01:09:40.349964] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.021 [2024-05-15 01:09:40.349976] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.021 [2024-05-15 01:09:40.350004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.021 qpair failed and we were unable to recover it. 00:22:28.021 [2024-05-15 01:09:40.359787] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.021 [2024-05-15 01:09:40.359965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.021 [2024-05-15 01:09:40.359990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.021 [2024-05-15 01:09:40.360005] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.021 [2024-05-15 01:09:40.360017] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.021 [2024-05-15 01:09:40.360045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.021 qpair failed and we were unable to recover it. 
00:22:28.021 [2024-05-15 01:09:40.369814] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.021 [2024-05-15 01:09:40.369990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.021 [2024-05-15 01:09:40.370016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.021 [2024-05-15 01:09:40.370030] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.021 [2024-05-15 01:09:40.370043] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.021 [2024-05-15 01:09:40.370071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.021 qpair failed and we were unable to recover it. 00:22:28.022 [2024-05-15 01:09:40.379858] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.022 [2024-05-15 01:09:40.380075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.022 [2024-05-15 01:09:40.380101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.022 [2024-05-15 01:09:40.380116] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.022 [2024-05-15 01:09:40.380129] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.022 [2024-05-15 01:09:40.380157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.022 qpair failed and we were unable to recover it. 00:22:28.022 [2024-05-15 01:09:40.389891] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.022 [2024-05-15 01:09:40.390067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.022 [2024-05-15 01:09:40.390093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.022 [2024-05-15 01:09:40.390111] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.022 [2024-05-15 01:09:40.390124] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.022 [2024-05-15 01:09:40.390152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.022 qpair failed and we were unable to recover it. 
00:22:28.022 [2024-05-15 01:09:40.399903] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.022 [2024-05-15 01:09:40.400072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.022 [2024-05-15 01:09:40.400097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.022 [2024-05-15 01:09:40.400113] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.022 [2024-05-15 01:09:40.400125] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.022 [2024-05-15 01:09:40.400152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.022 qpair failed and we were unable to recover it. 00:22:28.022 [2024-05-15 01:09:40.409960] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.022 [2024-05-15 01:09:40.410125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.022 [2024-05-15 01:09:40.410151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.022 [2024-05-15 01:09:40.410166] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.022 [2024-05-15 01:09:40.410178] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.022 [2024-05-15 01:09:40.410206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.022 qpair failed and we were unable to recover it. 00:22:28.281 [2024-05-15 01:09:40.419977] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.281 [2024-05-15 01:09:40.420143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.281 [2024-05-15 01:09:40.420169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.281 [2024-05-15 01:09:40.420185] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.281 [2024-05-15 01:09:40.420197] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.281 [2024-05-15 01:09:40.420226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.281 qpair failed and we were unable to recover it. 
00:22:28.281 [2024-05-15 01:09:40.429985] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.281 [2024-05-15 01:09:40.430147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.281 [2024-05-15 01:09:40.430173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.281 [2024-05-15 01:09:40.430188] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.281 [2024-05-15 01:09:40.430206] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.281 [2024-05-15 01:09:40.430236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.281 qpair failed and we were unable to recover it. 00:22:28.281 [2024-05-15 01:09:40.440039] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.281 [2024-05-15 01:09:40.440196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.281 [2024-05-15 01:09:40.440221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.281 [2024-05-15 01:09:40.440236] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.281 [2024-05-15 01:09:40.440248] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.281 [2024-05-15 01:09:40.440276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.281 qpair failed and we were unable to recover it. 00:22:28.281 [2024-05-15 01:09:40.450044] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.281 [2024-05-15 01:09:40.450209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.281 [2024-05-15 01:09:40.450234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.281 [2024-05-15 01:09:40.450249] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.281 [2024-05-15 01:09:40.450261] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.281 [2024-05-15 01:09:40.450288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.281 qpair failed and we were unable to recover it. 
00:22:28.281 [2024-05-15 01:09:40.460080] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.281 [2024-05-15 01:09:40.460245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.281 [2024-05-15 01:09:40.460270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.281 [2024-05-15 01:09:40.460285] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.281 [2024-05-15 01:09:40.460297] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.281 [2024-05-15 01:09:40.460325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.281 qpair failed and we were unable to recover it. 00:22:28.281 [2024-05-15 01:09:40.470093] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.281 [2024-05-15 01:09:40.470257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.281 [2024-05-15 01:09:40.470281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.281 [2024-05-15 01:09:40.470295] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.281 [2024-05-15 01:09:40.470307] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.281 [2024-05-15 01:09:40.470335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.281 qpair failed and we were unable to recover it. 00:22:28.281 [2024-05-15 01:09:40.480174] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.281 [2024-05-15 01:09:40.480347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.282 [2024-05-15 01:09:40.480373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.282 [2024-05-15 01:09:40.480392] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.282 [2024-05-15 01:09:40.480405] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.282 [2024-05-15 01:09:40.480434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.282 qpair failed and we were unable to recover it. 
00:22:28.282 [2024-05-15 01:09:40.490182] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.282 [2024-05-15 01:09:40.490387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.282 [2024-05-15 01:09:40.490413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.282 [2024-05-15 01:09:40.490428] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.282 [2024-05-15 01:09:40.490440] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.282 [2024-05-15 01:09:40.490468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.282 qpair failed and we were unable to recover it. 00:22:28.282 [2024-05-15 01:09:40.500223] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.282 [2024-05-15 01:09:40.500391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.282 [2024-05-15 01:09:40.500416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.282 [2024-05-15 01:09:40.500432] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.282 [2024-05-15 01:09:40.500444] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.282 [2024-05-15 01:09:40.500473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.282 qpair failed and we were unable to recover it. 00:22:28.282 [2024-05-15 01:09:40.510200] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.282 [2024-05-15 01:09:40.510369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.282 [2024-05-15 01:09:40.510395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.282 [2024-05-15 01:09:40.510410] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.282 [2024-05-15 01:09:40.510422] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.282 [2024-05-15 01:09:40.510450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.282 qpair failed and we were unable to recover it. 
00:22:28.282 [2024-05-15 01:09:40.520232] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.282 [2024-05-15 01:09:40.520392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.282 [2024-05-15 01:09:40.520418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.282 [2024-05-15 01:09:40.520438] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.282 [2024-05-15 01:09:40.520451] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.282 [2024-05-15 01:09:40.520479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.282 qpair failed and we were unable to recover it. 00:22:28.282 [2024-05-15 01:09:40.530249] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.282 [2024-05-15 01:09:40.530418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.282 [2024-05-15 01:09:40.530443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.282 [2024-05-15 01:09:40.530457] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.282 [2024-05-15 01:09:40.530469] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.282 [2024-05-15 01:09:40.530496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.282 qpair failed and we were unable to recover it. 00:22:28.282 [2024-05-15 01:09:40.540309] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.282 [2024-05-15 01:09:40.540483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.282 [2024-05-15 01:09:40.540508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.282 [2024-05-15 01:09:40.540522] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.282 [2024-05-15 01:09:40.540535] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.282 [2024-05-15 01:09:40.540563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.282 qpair failed and we were unable to recover it. 
00:22:28.282 [2024-05-15 01:09:40.550312] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.282 [2024-05-15 01:09:40.550475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.282 [2024-05-15 01:09:40.550500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.282 [2024-05-15 01:09:40.550515] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.282 [2024-05-15 01:09:40.550527] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.282 [2024-05-15 01:09:40.550555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.282 qpair failed and we were unable to recover it. 00:22:28.282 [2024-05-15 01:09:40.560329] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.282 [2024-05-15 01:09:40.560488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.282 [2024-05-15 01:09:40.560513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.282 [2024-05-15 01:09:40.560527] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.282 [2024-05-15 01:09:40.560539] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.282 [2024-05-15 01:09:40.560567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.282 qpair failed and we were unable to recover it. 00:22:28.282 [2024-05-15 01:09:40.570365] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.282 [2024-05-15 01:09:40.570529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.282 [2024-05-15 01:09:40.570554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.282 [2024-05-15 01:09:40.570569] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.282 [2024-05-15 01:09:40.570581] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.282 [2024-05-15 01:09:40.570608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.282 qpair failed and we were unable to recover it. 
00:22:28.282 [2024-05-15 01:09:40.580411] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.282 [2024-05-15 01:09:40.580578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.282 [2024-05-15 01:09:40.580603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.282 [2024-05-15 01:09:40.580618] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.282 [2024-05-15 01:09:40.580630] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.282 [2024-05-15 01:09:40.580657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.282 qpair failed and we were unable to recover it. 00:22:28.282 [2024-05-15 01:09:40.590404] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.282 [2024-05-15 01:09:40.590578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.282 [2024-05-15 01:09:40.590605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.282 [2024-05-15 01:09:40.590620] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.282 [2024-05-15 01:09:40.590633] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.282 [2024-05-15 01:09:40.590662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.282 qpair failed and we were unable to recover it. 00:22:28.282 [2024-05-15 01:09:40.600492] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.282 [2024-05-15 01:09:40.600647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.282 [2024-05-15 01:09:40.600672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.282 [2024-05-15 01:09:40.600687] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.282 [2024-05-15 01:09:40.600699] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.282 [2024-05-15 01:09:40.600727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.282 qpair failed and we were unable to recover it. 
00:22:28.282 [2024-05-15 01:09:40.610507] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.282 [2024-05-15 01:09:40.610666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.282 [2024-05-15 01:09:40.610692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.282 [2024-05-15 01:09:40.610715] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.282 [2024-05-15 01:09:40.610727] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.283 [2024-05-15 01:09:40.610755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.283 qpair failed and we were unable to recover it. 00:22:28.283 [2024-05-15 01:09:40.620525] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.283 [2024-05-15 01:09:40.620689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.283 [2024-05-15 01:09:40.620714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.283 [2024-05-15 01:09:40.620729] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.283 [2024-05-15 01:09:40.620741] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.283 [2024-05-15 01:09:40.620769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.283 qpair failed and we were unable to recover it. 00:22:28.283 [2024-05-15 01:09:40.630546] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.283 [2024-05-15 01:09:40.630705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.283 [2024-05-15 01:09:40.630730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.283 [2024-05-15 01:09:40.630745] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.283 [2024-05-15 01:09:40.630757] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.283 [2024-05-15 01:09:40.630784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.283 qpair failed and we were unable to recover it. 
00:22:28.283 [2024-05-15 01:09:40.640579] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.283 [2024-05-15 01:09:40.640733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.283 [2024-05-15 01:09:40.640758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.283 [2024-05-15 01:09:40.640773] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.283 [2024-05-15 01:09:40.640785] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.283 [2024-05-15 01:09:40.640813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.283 qpair failed and we were unable to recover it. 00:22:28.283 [2024-05-15 01:09:40.650614] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.283 [2024-05-15 01:09:40.650774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.283 [2024-05-15 01:09:40.650799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.283 [2024-05-15 01:09:40.650814] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.283 [2024-05-15 01:09:40.650826] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.283 [2024-05-15 01:09:40.650853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.283 qpair failed and we were unable to recover it. 00:22:28.283 [2024-05-15 01:09:40.660644] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.283 [2024-05-15 01:09:40.660852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.283 [2024-05-15 01:09:40.660877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.283 [2024-05-15 01:09:40.660892] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.283 [2024-05-15 01:09:40.660905] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.283 [2024-05-15 01:09:40.660944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.283 qpair failed and we were unable to recover it. 
00:22:28.283 [2024-05-15 01:09:40.670665] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.283 [2024-05-15 01:09:40.670823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.283 [2024-05-15 01:09:40.670848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.283 [2024-05-15 01:09:40.670863] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.283 [2024-05-15 01:09:40.670875] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.283 [2024-05-15 01:09:40.670903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.283 qpair failed and we were unable to recover it. 00:22:28.545 [2024-05-15 01:09:40.680676] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.545 [2024-05-15 01:09:40.680836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.545 [2024-05-15 01:09:40.680863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.545 [2024-05-15 01:09:40.680877] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.545 [2024-05-15 01:09:40.680890] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.545 [2024-05-15 01:09:40.680917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.545 qpair failed and we were unable to recover it. 00:22:28.545 [2024-05-15 01:09:40.690723] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.545 [2024-05-15 01:09:40.690881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.545 [2024-05-15 01:09:40.690906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.545 [2024-05-15 01:09:40.690921] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.545 [2024-05-15 01:09:40.690941] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.545 [2024-05-15 01:09:40.690970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.545 qpair failed and we were unable to recover it. 
00:22:28.545 [2024-05-15 01:09:40.700747] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.545 [2024-05-15 01:09:40.700909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.545 [2024-05-15 01:09:40.700948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.545 [2024-05-15 01:09:40.700966] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.545 [2024-05-15 01:09:40.700978] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.545 [2024-05-15 01:09:40.701005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.545 qpair failed and we were unable to recover it. 00:22:28.545 [2024-05-15 01:09:40.710805] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.545 [2024-05-15 01:09:40.710978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.545 [2024-05-15 01:09:40.711004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.545 [2024-05-15 01:09:40.711019] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.545 [2024-05-15 01:09:40.711031] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.545 [2024-05-15 01:09:40.711059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.545 qpair failed and we were unable to recover it. 00:22:28.545 [2024-05-15 01:09:40.720810] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.545 [2024-05-15 01:09:40.720981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.545 [2024-05-15 01:09:40.721007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.545 [2024-05-15 01:09:40.721021] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.545 [2024-05-15 01:09:40.721034] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.545 [2024-05-15 01:09:40.721061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.545 qpair failed and we were unable to recover it. 
00:22:28.545 [2024-05-15 01:09:40.730825] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.545 [2024-05-15 01:09:40.731003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.545 [2024-05-15 01:09:40.731038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.545 [2024-05-15 01:09:40.731053] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.545 [2024-05-15 01:09:40.731065] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.545 [2024-05-15 01:09:40.731093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.545 qpair failed and we were unable to recover it. 00:22:28.545 [2024-05-15 01:09:40.740879] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.545 [2024-05-15 01:09:40.741075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.545 [2024-05-15 01:09:40.741102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.545 [2024-05-15 01:09:40.741121] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.545 [2024-05-15 01:09:40.741134] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.545 [2024-05-15 01:09:40.741163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.545 qpair failed and we were unable to recover it. 00:22:28.545 [2024-05-15 01:09:40.750903] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.545 [2024-05-15 01:09:40.751121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.545 [2024-05-15 01:09:40.751146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.545 [2024-05-15 01:09:40.751161] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.545 [2024-05-15 01:09:40.751174] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.546 [2024-05-15 01:09:40.751201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.546 qpair failed and we were unable to recover it. 
00:22:28.546 [2024-05-15 01:09:40.760927] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.546 [2024-05-15 01:09:40.761092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.546 [2024-05-15 01:09:40.761117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.546 [2024-05-15 01:09:40.761132] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.546 [2024-05-15 01:09:40.761144] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.546 [2024-05-15 01:09:40.761172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.546 qpair failed and we were unable to recover it. 00:22:28.546 [2024-05-15 01:09:40.770971] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.546 [2024-05-15 01:09:40.771138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.546 [2024-05-15 01:09:40.771163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.546 [2024-05-15 01:09:40.771178] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.546 [2024-05-15 01:09:40.771190] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.546 [2024-05-15 01:09:40.771218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.546 qpair failed and we were unable to recover it. 00:22:28.546 [2024-05-15 01:09:40.780998] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.546 [2024-05-15 01:09:40.781189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.546 [2024-05-15 01:09:40.781214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.546 [2024-05-15 01:09:40.781229] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.546 [2024-05-15 01:09:40.781241] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.546 [2024-05-15 01:09:40.781268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.546 qpair failed and we were unable to recover it. 
00:22:28.546 [2024-05-15 01:09:40.791014] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.546 [2024-05-15 01:09:40.791220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.546 [2024-05-15 01:09:40.791251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.546 [2024-05-15 01:09:40.791266] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.546 [2024-05-15 01:09:40.791278] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.546 [2024-05-15 01:09:40.791307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.546 qpair failed and we were unable to recover it. 00:22:28.546 [2024-05-15 01:09:40.801055] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.546 [2024-05-15 01:09:40.801295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.546 [2024-05-15 01:09:40.801320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.546 [2024-05-15 01:09:40.801334] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.546 [2024-05-15 01:09:40.801347] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.546 [2024-05-15 01:09:40.801375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.546 qpair failed and we were unable to recover it. 00:22:28.546 [2024-05-15 01:09:40.811100] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.546 [2024-05-15 01:09:40.811293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.546 [2024-05-15 01:09:40.811320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.546 [2024-05-15 01:09:40.811346] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.546 [2024-05-15 01:09:40.811358] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.546 [2024-05-15 01:09:40.811387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.546 qpair failed and we were unable to recover it. 
00:22:28.546 [2024-05-15 01:09:40.821154] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.546 [2024-05-15 01:09:40.821361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.546 [2024-05-15 01:09:40.821388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.546 [2024-05-15 01:09:40.821407] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.546 [2024-05-15 01:09:40.821420] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.546 [2024-05-15 01:09:40.821449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.546 qpair failed and we were unable to recover it. 00:22:28.546 [2024-05-15 01:09:40.831210] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.546 [2024-05-15 01:09:40.831411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.546 [2024-05-15 01:09:40.831436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.546 [2024-05-15 01:09:40.831450] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.546 [2024-05-15 01:09:40.831463] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.546 [2024-05-15 01:09:40.831496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.546 qpair failed and we were unable to recover it. 00:22:28.546 [2024-05-15 01:09:40.841186] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.546 [2024-05-15 01:09:40.841357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.546 [2024-05-15 01:09:40.841383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.546 [2024-05-15 01:09:40.841398] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.546 [2024-05-15 01:09:40.841410] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.546 [2024-05-15 01:09:40.841438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.546 qpair failed and we were unable to recover it. 
00:22:28.546 [2024-05-15 01:09:40.851181] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.546 [2024-05-15 01:09:40.851341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.546 [2024-05-15 01:09:40.851366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.546 [2024-05-15 01:09:40.851381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.547 [2024-05-15 01:09:40.851393] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.547 [2024-05-15 01:09:40.851421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.547 qpair failed and we were unable to recover it. 00:22:28.547 [2024-05-15 01:09:40.861254] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.547 [2024-05-15 01:09:40.861489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.547 [2024-05-15 01:09:40.861514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.547 [2024-05-15 01:09:40.861529] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.547 [2024-05-15 01:09:40.861541] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.547 [2024-05-15 01:09:40.861569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.547 qpair failed and we were unable to recover it. 00:22:28.547 [2024-05-15 01:09:40.871248] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.547 [2024-05-15 01:09:40.871412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.547 [2024-05-15 01:09:40.871437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.547 [2024-05-15 01:09:40.871451] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.547 [2024-05-15 01:09:40.871464] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.547 [2024-05-15 01:09:40.871492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.547 qpair failed and we were unable to recover it. 
00:22:28.547 [2024-05-15 01:09:40.881290] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.547 [2024-05-15 01:09:40.881491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.547 [2024-05-15 01:09:40.881521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.547 [2024-05-15 01:09:40.881537] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.547 [2024-05-15 01:09:40.881549] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.547 [2024-05-15 01:09:40.881577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.547 qpair failed and we were unable to recover it. 00:22:28.547 [2024-05-15 01:09:40.891321] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.547 [2024-05-15 01:09:40.891482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.547 [2024-05-15 01:09:40.891507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.547 [2024-05-15 01:09:40.891522] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.547 [2024-05-15 01:09:40.891535] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.547 [2024-05-15 01:09:40.891562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.547 qpair failed and we were unable to recover it. 00:22:28.547 [2024-05-15 01:09:40.901382] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.547 [2024-05-15 01:09:40.901579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.547 [2024-05-15 01:09:40.901606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.547 [2024-05-15 01:09:40.901621] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.547 [2024-05-15 01:09:40.901634] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.547 [2024-05-15 01:09:40.901662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.547 qpair failed and we were unable to recover it. 
00:22:28.547 [2024-05-15 01:09:40.911363] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.547 [2024-05-15 01:09:40.911521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.547 [2024-05-15 01:09:40.911547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.547 [2024-05-15 01:09:40.911562] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.547 [2024-05-15 01:09:40.911574] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.547 [2024-05-15 01:09:40.911601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.547 qpair failed and we were unable to recover it. 00:22:28.547 [2024-05-15 01:09:40.921412] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.547 [2024-05-15 01:09:40.921640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.547 [2024-05-15 01:09:40.921665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.547 [2024-05-15 01:09:40.921680] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.547 [2024-05-15 01:09:40.921693] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.547 [2024-05-15 01:09:40.921726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.547 qpair failed and we were unable to recover it. 00:22:28.547 [2024-05-15 01:09:40.931431] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.547 [2024-05-15 01:09:40.931584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.547 [2024-05-15 01:09:40.931609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.547 [2024-05-15 01:09:40.931624] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.547 [2024-05-15 01:09:40.931636] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.547 [2024-05-15 01:09:40.931664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.547 qpair failed and we were unable to recover it. 
00:22:28.807 [2024-05-15 01:09:40.941484] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.807 [2024-05-15 01:09:40.941651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.807 [2024-05-15 01:09:40.941676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.807 [2024-05-15 01:09:40.941691] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.807 [2024-05-15 01:09:40.941703] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.807 [2024-05-15 01:09:40.941731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.807 qpair failed and we were unable to recover it. 00:22:28.807 [2024-05-15 01:09:40.951492] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.807 [2024-05-15 01:09:40.951655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.807 [2024-05-15 01:09:40.951681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.807 [2024-05-15 01:09:40.951696] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.807 [2024-05-15 01:09:40.951708] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.807 [2024-05-15 01:09:40.951736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.807 qpair failed and we were unable to recover it. 00:22:28.807 [2024-05-15 01:09:40.961537] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.807 [2024-05-15 01:09:40.961708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.807 [2024-05-15 01:09:40.961733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.807 [2024-05-15 01:09:40.961748] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.807 [2024-05-15 01:09:40.961760] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.807 [2024-05-15 01:09:40.961788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.807 qpair failed and we were unable to recover it. 
00:22:28.807 [2024-05-15 01:09:40.971524] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.807 [2024-05-15 01:09:40.971689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.807 [2024-05-15 01:09:40.971719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.807 [2024-05-15 01:09:40.971735] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.807 [2024-05-15 01:09:40.971747] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.807 [2024-05-15 01:09:40.971775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.807 qpair failed and we were unable to recover it. 00:22:28.807 [2024-05-15 01:09:40.981592] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.807 [2024-05-15 01:09:40.981773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.807 [2024-05-15 01:09:40.981798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.807 [2024-05-15 01:09:40.981812] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.807 [2024-05-15 01:09:40.981824] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.807 [2024-05-15 01:09:40.981852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.807 qpair failed and we were unable to recover it. 00:22:28.807 [2024-05-15 01:09:40.991580] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.807 [2024-05-15 01:09:40.991753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.807 [2024-05-15 01:09:40.991778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.807 [2024-05-15 01:09:40.991793] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.807 [2024-05-15 01:09:40.991805] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.807 [2024-05-15 01:09:40.991833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.807 qpair failed and we were unable to recover it. 
00:22:28.807 [2024-05-15 01:09:41.001632] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.807 [2024-05-15 01:09:41.001826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.807 [2024-05-15 01:09:41.001851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.807 [2024-05-15 01:09:41.001866] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.807 [2024-05-15 01:09:41.001878] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.807 [2024-05-15 01:09:41.001906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.807 qpair failed and we were unable to recover it. 00:22:28.807 [2024-05-15 01:09:41.011689] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.807 [2024-05-15 01:09:41.011876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.807 [2024-05-15 01:09:41.011902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.807 [2024-05-15 01:09:41.011916] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.807 [2024-05-15 01:09:41.011940] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.807 [2024-05-15 01:09:41.011970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.807 qpair failed and we were unable to recover it. 00:22:28.807 [2024-05-15 01:09:41.021668] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.807 [2024-05-15 01:09:41.021833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.807 [2024-05-15 01:09:41.021858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.807 [2024-05-15 01:09:41.021873] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.807 [2024-05-15 01:09:41.021885] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.807 [2024-05-15 01:09:41.021912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.807 qpair failed and we were unable to recover it. 
00:22:28.807 [2024-05-15 01:09:41.031751] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.808 [2024-05-15 01:09:41.031970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.808 [2024-05-15 01:09:41.031996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.808 [2024-05-15 01:09:41.032010] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.808 [2024-05-15 01:09:41.032022] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.808 [2024-05-15 01:09:41.032050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.808 qpair failed and we were unable to recover it. 00:22:28.808 [2024-05-15 01:09:41.041818] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.808 [2024-05-15 01:09:41.042043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.808 [2024-05-15 01:09:41.042068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.808 [2024-05-15 01:09:41.042082] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.808 [2024-05-15 01:09:41.042094] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.808 [2024-05-15 01:09:41.042123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.808 qpair failed and we were unable to recover it. 00:22:28.808 [2024-05-15 01:09:41.051809] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.808 [2024-05-15 01:09:41.052007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.808 [2024-05-15 01:09:41.052032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.808 [2024-05-15 01:09:41.052047] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.808 [2024-05-15 01:09:41.052059] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.808 [2024-05-15 01:09:41.052087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.808 qpair failed and we were unable to recover it. 
00:22:28.808 [2024-05-15 01:09:41.061822] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.808 [2024-05-15 01:09:41.062057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.808 [2024-05-15 01:09:41.062083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.808 [2024-05-15 01:09:41.062098] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.808 [2024-05-15 01:09:41.062110] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.808 [2024-05-15 01:09:41.062139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.808 qpair failed and we were unable to recover it. 00:22:28.808 [2024-05-15 01:09:41.071827] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.808 [2024-05-15 01:09:41.071992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.808 [2024-05-15 01:09:41.072017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.808 [2024-05-15 01:09:41.072032] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.808 [2024-05-15 01:09:41.072045] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.808 [2024-05-15 01:09:41.072073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.808 qpair failed and we were unable to recover it. 00:22:28.808 [2024-05-15 01:09:41.081845] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.808 [2024-05-15 01:09:41.082019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.808 [2024-05-15 01:09:41.082046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.808 [2024-05-15 01:09:41.082061] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.808 [2024-05-15 01:09:41.082074] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.808 [2024-05-15 01:09:41.082102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.808 qpair failed and we were unable to recover it. 
00:22:28.808 [2024-05-15 01:09:41.091876] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.808 [2024-05-15 01:09:41.092052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.808 [2024-05-15 01:09:41.092078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.808 [2024-05-15 01:09:41.092093] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.808 [2024-05-15 01:09:41.092106] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.808 [2024-05-15 01:09:41.092134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.808 qpair failed and we were unable to recover it. 00:22:28.808 [2024-05-15 01:09:41.101917] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.808 [2024-05-15 01:09:41.102096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.808 [2024-05-15 01:09:41.102121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.808 [2024-05-15 01:09:41.102136] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.808 [2024-05-15 01:09:41.102153] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.808 [2024-05-15 01:09:41.102182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.808 qpair failed and we were unable to recover it. 00:22:28.808 [2024-05-15 01:09:41.111948] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.808 [2024-05-15 01:09:41.112153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.808 [2024-05-15 01:09:41.112178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.808 [2024-05-15 01:09:41.112193] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.808 [2024-05-15 01:09:41.112205] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.808 [2024-05-15 01:09:41.112233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.808 qpair failed and we were unable to recover it. 
00:22:28.808 [2024-05-15 01:09:41.121981] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.808 [2024-05-15 01:09:41.122142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.808 [2024-05-15 01:09:41.122168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.808 [2024-05-15 01:09:41.122182] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.808 [2024-05-15 01:09:41.122195] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.808 [2024-05-15 01:09:41.122223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.808 qpair failed and we were unable to recover it. 00:22:28.808 [2024-05-15 01:09:41.132034] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.808 [2024-05-15 01:09:41.132210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.808 [2024-05-15 01:09:41.132241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.808 [2024-05-15 01:09:41.132256] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.808 [2024-05-15 01:09:41.132269] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.808 [2024-05-15 01:09:41.132296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.808 qpair failed and we were unable to recover it. 00:22:28.808 [2024-05-15 01:09:41.142053] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.808 [2024-05-15 01:09:41.142223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.808 [2024-05-15 01:09:41.142256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.808 [2024-05-15 01:09:41.142271] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.808 [2024-05-15 01:09:41.142284] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.808 [2024-05-15 01:09:41.142312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.808 qpair failed and we were unable to recover it. 
00:22:28.808 [2024-05-15 01:09:41.152031] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.808 [2024-05-15 01:09:41.152196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.808 [2024-05-15 01:09:41.152221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.808 [2024-05-15 01:09:41.152246] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.808 [2024-05-15 01:09:41.152258] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.808 [2024-05-15 01:09:41.152285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.808 qpair failed and we were unable to recover it. 00:22:28.808 [2024-05-15 01:09:41.162059] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.808 [2024-05-15 01:09:41.162228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.808 [2024-05-15 01:09:41.162253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.808 [2024-05-15 01:09:41.162268] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.808 [2024-05-15 01:09:41.162280] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.808 [2024-05-15 01:09:41.162308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.808 qpair failed and we were unable to recover it. 00:22:28.809 [2024-05-15 01:09:41.172134] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.809 [2024-05-15 01:09:41.172301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.809 [2024-05-15 01:09:41.172326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.809 [2024-05-15 01:09:41.172341] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.809 [2024-05-15 01:09:41.172354] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.809 [2024-05-15 01:09:41.172381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.809 qpair failed and we were unable to recover it. 
00:22:28.809 [2024-05-15 01:09:41.182157] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.809 [2024-05-15 01:09:41.182326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.809 [2024-05-15 01:09:41.182351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.809 [2024-05-15 01:09:41.182366] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.809 [2024-05-15 01:09:41.182378] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.809 [2024-05-15 01:09:41.182406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.809 qpair failed and we were unable to recover it. 00:22:28.809 [2024-05-15 01:09:41.192191] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:28.809 [2024-05-15 01:09:41.192355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:28.809 [2024-05-15 01:09:41.192381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:28.809 [2024-05-15 01:09:41.192396] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:28.809 [2024-05-15 01:09:41.192414] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:28.809 [2024-05-15 01:09:41.192442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:28.809 qpair failed and we were unable to recover it. 00:22:29.067 [2024-05-15 01:09:41.202219] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.067 [2024-05-15 01:09:41.202404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.067 [2024-05-15 01:09:41.202429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.067 [2024-05-15 01:09:41.202445] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.067 [2024-05-15 01:09:41.202457] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.067 [2024-05-15 01:09:41.202485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.067 qpair failed and we were unable to recover it. 
00:22:29.067 [2024-05-15 01:09:41.212281] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.067 [2024-05-15 01:09:41.212443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.067 [2024-05-15 01:09:41.212469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.067 [2024-05-15 01:09:41.212483] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.067 [2024-05-15 01:09:41.212496] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.067 [2024-05-15 01:09:41.212524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.067 qpair failed and we were unable to recover it. 00:22:29.067 [2024-05-15 01:09:41.222281] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.068 [2024-05-15 01:09:41.222449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.068 [2024-05-15 01:09:41.222474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.068 [2024-05-15 01:09:41.222489] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.068 [2024-05-15 01:09:41.222501] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.068 [2024-05-15 01:09:41.222529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.068 qpair failed and we were unable to recover it. 00:22:29.068 [2024-05-15 01:09:41.232322] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.068 [2024-05-15 01:09:41.232510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.068 [2024-05-15 01:09:41.232535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.068 [2024-05-15 01:09:41.232551] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.068 [2024-05-15 01:09:41.232563] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.068 [2024-05-15 01:09:41.232591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.068 qpair failed and we were unable to recover it. 
00:22:29.068 [2024-05-15 01:09:41.242305] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.068 [2024-05-15 01:09:41.242469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.068 [2024-05-15 01:09:41.242494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.068 [2024-05-15 01:09:41.242509] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.068 [2024-05-15 01:09:41.242521] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.068 [2024-05-15 01:09:41.242548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.068 qpair failed and we were unable to recover it. 00:22:29.068 [2024-05-15 01:09:41.252369] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.068 [2024-05-15 01:09:41.252531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.068 [2024-05-15 01:09:41.252557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.068 [2024-05-15 01:09:41.252572] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.068 [2024-05-15 01:09:41.252584] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.068 [2024-05-15 01:09:41.252612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.068 qpair failed and we were unable to recover it. 00:22:29.068 [2024-05-15 01:09:41.262360] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.068 [2024-05-15 01:09:41.262541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.068 [2024-05-15 01:09:41.262566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.068 [2024-05-15 01:09:41.262580] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.068 [2024-05-15 01:09:41.262593] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.068 [2024-05-15 01:09:41.262621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.068 qpair failed and we were unable to recover it. 
00:22:29.068 [2024-05-15 01:09:41.272377] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.068 [2024-05-15 01:09:41.272541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.068 [2024-05-15 01:09:41.272566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.068 [2024-05-15 01:09:41.272580] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.068 [2024-05-15 01:09:41.272592] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.068 [2024-05-15 01:09:41.272619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.068 qpair failed and we were unable to recover it. 00:22:29.068 [2024-05-15 01:09:41.282412] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.068 [2024-05-15 01:09:41.282568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.068 [2024-05-15 01:09:41.282593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.068 [2024-05-15 01:09:41.282614] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.068 [2024-05-15 01:09:41.282627] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.068 [2024-05-15 01:09:41.282654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.068 qpair failed and we were unable to recover it. 00:22:29.068 [2024-05-15 01:09:41.292497] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.068 [2024-05-15 01:09:41.292690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.068 [2024-05-15 01:09:41.292717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.068 [2024-05-15 01:09:41.292732] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.068 [2024-05-15 01:09:41.292745] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.068 [2024-05-15 01:09:41.292774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.068 qpair failed and we were unable to recover it. 
00:22:29.068 [2024-05-15 01:09:41.302476] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.068 [2024-05-15 01:09:41.302677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.068 [2024-05-15 01:09:41.302702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.068 [2024-05-15 01:09:41.302716] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.068 [2024-05-15 01:09:41.302728] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.068 [2024-05-15 01:09:41.302757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.068 qpair failed and we were unable to recover it. 00:22:29.068 [2024-05-15 01:09:41.312489] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.068 [2024-05-15 01:09:41.312671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.068 [2024-05-15 01:09:41.312697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.068 [2024-05-15 01:09:41.312711] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.068 [2024-05-15 01:09:41.312723] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.068 [2024-05-15 01:09:41.312751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.068 qpair failed and we were unable to recover it. 00:22:29.068 [2024-05-15 01:09:41.322551] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.068 [2024-05-15 01:09:41.322715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.068 [2024-05-15 01:09:41.322741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.068 [2024-05-15 01:09:41.322756] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.068 [2024-05-15 01:09:41.322768] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.068 [2024-05-15 01:09:41.322796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.068 qpair failed and we were unable to recover it. 
00:22:29.068 [2024-05-15 01:09:41.332579] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.068 [2024-05-15 01:09:41.332740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.068 [2024-05-15 01:09:41.332766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.068 [2024-05-15 01:09:41.332781] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.068 [2024-05-15 01:09:41.332794] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.068 [2024-05-15 01:09:41.332822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.068 qpair failed and we were unable to recover it. 00:22:29.068 [2024-05-15 01:09:41.342608] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.068 [2024-05-15 01:09:41.342773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.068 [2024-05-15 01:09:41.342798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.068 [2024-05-15 01:09:41.342813] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.068 [2024-05-15 01:09:41.342825] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.068 [2024-05-15 01:09:41.342853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.068 qpair failed and we were unable to recover it. 00:22:29.068 [2024-05-15 01:09:41.352618] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.068 [2024-05-15 01:09:41.352783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.068 [2024-05-15 01:09:41.352809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.068 [2024-05-15 01:09:41.352823] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.068 [2024-05-15 01:09:41.352835] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.068 [2024-05-15 01:09:41.352863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.069 qpair failed and we were unable to recover it. 
00:22:29.069 [2024-05-15 01:09:41.362629] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.069 [2024-05-15 01:09:41.362788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.069 [2024-05-15 01:09:41.362813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.069 [2024-05-15 01:09:41.362828] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.069 [2024-05-15 01:09:41.362840] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.069 [2024-05-15 01:09:41.362868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.069 qpair failed and we were unable to recover it. 00:22:29.069 [2024-05-15 01:09:41.372672] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.069 [2024-05-15 01:09:41.372872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.069 [2024-05-15 01:09:41.372898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.069 [2024-05-15 01:09:41.372921] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.069 [2024-05-15 01:09:41.372942] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.069 [2024-05-15 01:09:41.372971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.069 qpair failed and we were unable to recover it. 00:22:29.069 [2024-05-15 01:09:41.382716] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.069 [2024-05-15 01:09:41.382893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.069 [2024-05-15 01:09:41.382918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.069 [2024-05-15 01:09:41.382942] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.069 [2024-05-15 01:09:41.382956] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.069 [2024-05-15 01:09:41.382984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.069 qpair failed and we were unable to recover it. 
00:22:29.069 [2024-05-15 01:09:41.392742] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.069 [2024-05-15 01:09:41.392918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.069 [2024-05-15 01:09:41.392950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.069 [2024-05-15 01:09:41.392965] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.069 [2024-05-15 01:09:41.392978] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.069 [2024-05-15 01:09:41.393006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.069 qpair failed and we were unable to recover it. 00:22:29.069 [2024-05-15 01:09:41.402748] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.069 [2024-05-15 01:09:41.402928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.069 [2024-05-15 01:09:41.402960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.069 [2024-05-15 01:09:41.402975] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.069 [2024-05-15 01:09:41.402987] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.069 [2024-05-15 01:09:41.403016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.069 qpair failed and we were unable to recover it. 00:22:29.069 [2024-05-15 01:09:41.412772] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.069 [2024-05-15 01:09:41.412943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.069 [2024-05-15 01:09:41.412968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.069 [2024-05-15 01:09:41.412983] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.069 [2024-05-15 01:09:41.412995] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.069 [2024-05-15 01:09:41.413023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.069 qpair failed and we were unable to recover it. 
00:22:29.069 [2024-05-15 01:09:41.422836] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.069 [2024-05-15 01:09:41.423011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.069 [2024-05-15 01:09:41.423037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.069 [2024-05-15 01:09:41.423052] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.069 [2024-05-15 01:09:41.423064] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.069 [2024-05-15 01:09:41.423092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.069 qpair failed and we were unable to recover it. 00:22:29.069 [2024-05-15 01:09:41.432832] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.069 [2024-05-15 01:09:41.432995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.069 [2024-05-15 01:09:41.433021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.069 [2024-05-15 01:09:41.433036] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.069 [2024-05-15 01:09:41.433049] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.069 [2024-05-15 01:09:41.433077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.069 qpair failed and we were unable to recover it. 00:22:29.069 [2024-05-15 01:09:41.442859] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.069 [2024-05-15 01:09:41.443022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.069 [2024-05-15 01:09:41.443047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.069 [2024-05-15 01:09:41.443061] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.069 [2024-05-15 01:09:41.443073] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.069 [2024-05-15 01:09:41.443101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.069 qpair failed and we were unable to recover it. 
00:22:29.069 [2024-05-15 01:09:41.452881] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.069 [2024-05-15 01:09:41.453036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.069 [2024-05-15 01:09:41.453062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.069 [2024-05-15 01:09:41.453076] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.069 [2024-05-15 01:09:41.453088] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.069 [2024-05-15 01:09:41.453116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.069 qpair failed and we were unable to recover it. 00:22:29.327 [2024-05-15 01:09:41.462952] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.327 [2024-05-15 01:09:41.463130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.327 [2024-05-15 01:09:41.463156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.327 [2024-05-15 01:09:41.463176] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.327 [2024-05-15 01:09:41.463189] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.327 [2024-05-15 01:09:41.463217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.327 qpair failed and we were unable to recover it. 00:22:29.327 [2024-05-15 01:09:41.472983] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.327 [2024-05-15 01:09:41.473166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.327 [2024-05-15 01:09:41.473190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.327 [2024-05-15 01:09:41.473204] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.327 [2024-05-15 01:09:41.473217] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.327 [2024-05-15 01:09:41.473245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.327 qpair failed and we were unable to recover it. 
00:22:29.327 [2024-05-15 01:09:41.482992] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.327 [2024-05-15 01:09:41.483214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.327 [2024-05-15 01:09:41.483239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.327 [2024-05-15 01:09:41.483254] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.327 [2024-05-15 01:09:41.483266] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.327 [2024-05-15 01:09:41.483293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.327 qpair failed and we were unable to recover it. 00:22:29.327 [2024-05-15 01:09:41.493023] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.327 [2024-05-15 01:09:41.493214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.327 [2024-05-15 01:09:41.493240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.327 [2024-05-15 01:09:41.493255] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.327 [2024-05-15 01:09:41.493267] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.328 [2024-05-15 01:09:41.493295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.328 qpair failed and we were unable to recover it. 00:22:29.328 [2024-05-15 01:09:41.503053] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.328 [2024-05-15 01:09:41.503232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.328 [2024-05-15 01:09:41.503257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.328 [2024-05-15 01:09:41.503272] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.328 [2024-05-15 01:09:41.503284] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.328 [2024-05-15 01:09:41.503312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.328 qpair failed and we were unable to recover it. 
00:22:29.328 [2024-05-15 01:09:41.513050] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.328 [2024-05-15 01:09:41.513209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.328 [2024-05-15 01:09:41.513234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.328 [2024-05-15 01:09:41.513249] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.328 [2024-05-15 01:09:41.513261] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.328 [2024-05-15 01:09:41.513289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.328 qpair failed and we were unable to recover it. 00:22:29.328 [2024-05-15 01:09:41.523105] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.328 [2024-05-15 01:09:41.523267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.328 [2024-05-15 01:09:41.523292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.328 [2024-05-15 01:09:41.523307] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.328 [2024-05-15 01:09:41.523319] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.328 [2024-05-15 01:09:41.523347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.328 qpair failed and we were unable to recover it. 00:22:29.328 [2024-05-15 01:09:41.533116] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.328 [2024-05-15 01:09:41.533276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.328 [2024-05-15 01:09:41.533301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.328 [2024-05-15 01:09:41.533315] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.328 [2024-05-15 01:09:41.533328] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.328 [2024-05-15 01:09:41.533356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.328 qpair failed and we were unable to recover it. 
00:22:29.328 [2024-05-15 01:09:41.543156] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.328 [2024-05-15 01:09:41.543317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.328 [2024-05-15 01:09:41.543342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.328 [2024-05-15 01:09:41.543357] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.328 [2024-05-15 01:09:41.543369] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.328 [2024-05-15 01:09:41.543397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.328 qpair failed and we were unable to recover it. 00:22:29.328 [2024-05-15 01:09:41.553181] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.328 [2024-05-15 01:09:41.553335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.328 [2024-05-15 01:09:41.553365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.328 [2024-05-15 01:09:41.553380] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.328 [2024-05-15 01:09:41.553393] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.328 [2024-05-15 01:09:41.553420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.328 qpair failed and we were unable to recover it. 00:22:29.328 [2024-05-15 01:09:41.563278] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.328 [2024-05-15 01:09:41.563444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.328 [2024-05-15 01:09:41.563469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.328 [2024-05-15 01:09:41.563484] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.328 [2024-05-15 01:09:41.563496] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.328 [2024-05-15 01:09:41.563525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.328 qpair failed and we were unable to recover it. 
00:22:29.328 [2024-05-15 01:09:41.573230] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.328 [2024-05-15 01:09:41.573390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.328 [2024-05-15 01:09:41.573416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.328 [2024-05-15 01:09:41.573431] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.328 [2024-05-15 01:09:41.573443] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.328 [2024-05-15 01:09:41.573470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.328 qpair failed and we were unable to recover it. 00:22:29.328 [2024-05-15 01:09:41.583273] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.328 [2024-05-15 01:09:41.583440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.328 [2024-05-15 01:09:41.583464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.328 [2024-05-15 01:09:41.583479] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.328 [2024-05-15 01:09:41.583491] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.328 [2024-05-15 01:09:41.583519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.328 qpair failed and we were unable to recover it. 00:22:29.328 [2024-05-15 01:09:41.593285] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.328 [2024-05-15 01:09:41.593442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.328 [2024-05-15 01:09:41.593468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.328 [2024-05-15 01:09:41.593482] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.328 [2024-05-15 01:09:41.593495] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.328 [2024-05-15 01:09:41.593528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.328 qpair failed and we were unable to recover it. 
00:22:29.328 [2024-05-15 01:09:41.603327] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.328 [2024-05-15 01:09:41.603487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.328 [2024-05-15 01:09:41.603512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.328 [2024-05-15 01:09:41.603526] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.328 [2024-05-15 01:09:41.603538] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.328 [2024-05-15 01:09:41.603565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.328 qpair failed and we were unable to recover it. 00:22:29.328 [2024-05-15 01:09:41.613353] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.328 [2024-05-15 01:09:41.613510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.328 [2024-05-15 01:09:41.613536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.328 [2024-05-15 01:09:41.613551] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.328 [2024-05-15 01:09:41.613563] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.328 [2024-05-15 01:09:41.613590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.328 qpair failed and we were unable to recover it. 00:22:29.328 [2024-05-15 01:09:41.623386] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.328 [2024-05-15 01:09:41.623564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.328 [2024-05-15 01:09:41.623589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.328 [2024-05-15 01:09:41.623603] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.328 [2024-05-15 01:09:41.623616] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.328 [2024-05-15 01:09:41.623643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.328 qpair failed and we were unable to recover it. 
00:22:29.328 [2024-05-15 01:09:41.633415] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.328 [2024-05-15 01:09:41.633587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.328 [2024-05-15 01:09:41.633612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.328 [2024-05-15 01:09:41.633627] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.328 [2024-05-15 01:09:41.633639] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.329 [2024-05-15 01:09:41.633667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.329 qpair failed and we were unable to recover it. 00:22:29.329 [2024-05-15 01:09:41.643440] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.329 [2024-05-15 01:09:41.643602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.329 [2024-05-15 01:09:41.643632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.329 [2024-05-15 01:09:41.643648] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.329 [2024-05-15 01:09:41.643660] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.329 [2024-05-15 01:09:41.643688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.329 qpair failed and we were unable to recover it. 00:22:29.329 [2024-05-15 01:09:41.653474] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.329 [2024-05-15 01:09:41.653637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.329 [2024-05-15 01:09:41.653662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.329 [2024-05-15 01:09:41.653676] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.329 [2024-05-15 01:09:41.653688] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.329 [2024-05-15 01:09:41.653716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.329 qpair failed and we were unable to recover it. 
00:22:29.329 [2024-05-15 01:09:41.663551] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.329 [2024-05-15 01:09:41.663737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.329 [2024-05-15 01:09:41.663763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.329 [2024-05-15 01:09:41.663778] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.329 [2024-05-15 01:09:41.663790] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.329 [2024-05-15 01:09:41.663817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.329 qpair failed and we were unable to recover it. 00:22:29.329 [2024-05-15 01:09:41.673530] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.329 [2024-05-15 01:09:41.673691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.329 [2024-05-15 01:09:41.673716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.329 [2024-05-15 01:09:41.673731] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.329 [2024-05-15 01:09:41.673743] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.329 [2024-05-15 01:09:41.673771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.329 qpair failed and we were unable to recover it. 00:22:29.329 [2024-05-15 01:09:41.683601] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.329 [2024-05-15 01:09:41.683761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.329 [2024-05-15 01:09:41.683786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.329 [2024-05-15 01:09:41.683800] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.329 [2024-05-15 01:09:41.683812] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.329 [2024-05-15 01:09:41.683845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.329 qpair failed and we were unable to recover it. 
00:22:29.329 [2024-05-15 01:09:41.693608] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.329 [2024-05-15 01:09:41.693803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.329 [2024-05-15 01:09:41.693828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.329 [2024-05-15 01:09:41.693842] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.329 [2024-05-15 01:09:41.693855] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.329 [2024-05-15 01:09:41.693882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.329 qpair failed and we were unable to recover it. 00:22:29.329 [2024-05-15 01:09:41.703617] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.329 [2024-05-15 01:09:41.703777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.329 [2024-05-15 01:09:41.703802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.329 [2024-05-15 01:09:41.703817] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.329 [2024-05-15 01:09:41.703829] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.329 [2024-05-15 01:09:41.703857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.329 qpair failed and we were unable to recover it. 00:22:29.329 [2024-05-15 01:09:41.713688] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.329 [2024-05-15 01:09:41.713852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.329 [2024-05-15 01:09:41.713878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.329 [2024-05-15 01:09:41.713892] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.329 [2024-05-15 01:09:41.713904] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.329 [2024-05-15 01:09:41.713939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.329 qpair failed and we were unable to recover it. 
00:22:29.588 [2024-05-15 01:09:41.723686] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.588 [2024-05-15 01:09:41.723857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.588 [2024-05-15 01:09:41.723882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.588 [2024-05-15 01:09:41.723897] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.588 [2024-05-15 01:09:41.723910] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.588 [2024-05-15 01:09:41.723945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.588 qpair failed and we were unable to recover it. 00:22:29.588 [2024-05-15 01:09:41.733707] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.588 [2024-05-15 01:09:41.733913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.588 [2024-05-15 01:09:41.733951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.588 [2024-05-15 01:09:41.733967] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.588 [2024-05-15 01:09:41.733979] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.588 [2024-05-15 01:09:41.734006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.588 qpair failed and we were unable to recover it. 00:22:29.588 [2024-05-15 01:09:41.743750] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.588 [2024-05-15 01:09:41.743949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.588 [2024-05-15 01:09:41.743974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.588 [2024-05-15 01:09:41.743988] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.588 [2024-05-15 01:09:41.744000] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.588 [2024-05-15 01:09:41.744027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.588 qpair failed and we were unable to recover it. 
00:22:29.588 [2024-05-15 01:09:41.753761] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.588 [2024-05-15 01:09:41.753939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.588 [2024-05-15 01:09:41.753965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.588 [2024-05-15 01:09:41.753980] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.588 [2024-05-15 01:09:41.753992] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.588 [2024-05-15 01:09:41.754020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.588 qpair failed and we were unable to recover it. 00:22:29.588 [2024-05-15 01:09:41.763835] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.588 [2024-05-15 01:09:41.764042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.588 [2024-05-15 01:09:41.764068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.588 [2024-05-15 01:09:41.764082] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.588 [2024-05-15 01:09:41.764094] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.588 [2024-05-15 01:09:41.764123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.588 qpair failed and we were unable to recover it. 00:22:29.588 [2024-05-15 01:09:41.773832] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.588 [2024-05-15 01:09:41.774003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.588 [2024-05-15 01:09:41.774028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.588 [2024-05-15 01:09:41.774043] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.588 [2024-05-15 01:09:41.774055] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.588 [2024-05-15 01:09:41.774089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.588 qpair failed and we were unable to recover it. 
00:22:29.588 [2024-05-15 01:09:41.783864] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.588 [2024-05-15 01:09:41.784034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.588 [2024-05-15 01:09:41.784060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.588 [2024-05-15 01:09:41.784074] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.588 [2024-05-15 01:09:41.784087] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.588 [2024-05-15 01:09:41.784115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.588 qpair failed and we were unable to recover it. 00:22:29.588 [2024-05-15 01:09:41.793880] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.588 [2024-05-15 01:09:41.794050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.588 [2024-05-15 01:09:41.794076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.588 [2024-05-15 01:09:41.794090] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.588 [2024-05-15 01:09:41.794102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.588 [2024-05-15 01:09:41.794130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.588 qpair failed and we were unable to recover it. 00:22:29.588 [2024-05-15 01:09:41.803907] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.588 [2024-05-15 01:09:41.804072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.588 [2024-05-15 01:09:41.804096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.588 [2024-05-15 01:09:41.804111] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.588 [2024-05-15 01:09:41.804123] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.588 [2024-05-15 01:09:41.804151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.588 qpair failed and we were unable to recover it. 
00:22:29.588 [2024-05-15 01:09:41.813923] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.588 [2024-05-15 01:09:41.814088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.588 [2024-05-15 01:09:41.814113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.588 [2024-05-15 01:09:41.814128] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.588 [2024-05-15 01:09:41.814140] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.588 [2024-05-15 01:09:41.814167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.588 qpair failed and we were unable to recover it. 00:22:29.588 [2024-05-15 01:09:41.823992] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.588 [2024-05-15 01:09:41.824166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.588 [2024-05-15 01:09:41.824196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.588 [2024-05-15 01:09:41.824212] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.588 [2024-05-15 01:09:41.824224] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.588 [2024-05-15 01:09:41.824252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.588 qpair failed and we were unable to recover it. 00:22:29.588 [2024-05-15 01:09:41.834004] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.588 [2024-05-15 01:09:41.834162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.588 [2024-05-15 01:09:41.834188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.588 [2024-05-15 01:09:41.834202] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.588 [2024-05-15 01:09:41.834215] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.588 [2024-05-15 01:09:41.834242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.588 qpair failed and we were unable to recover it. 
00:22:29.588 [2024-05-15 01:09:41.844022] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.588 [2024-05-15 01:09:41.844178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.588 [2024-05-15 01:09:41.844202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.588 [2024-05-15 01:09:41.844217] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.588 [2024-05-15 01:09:41.844229] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.588 [2024-05-15 01:09:41.844257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.588 qpair failed and we were unable to recover it. 00:22:29.588 [2024-05-15 01:09:41.854103] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.588 [2024-05-15 01:09:41.854265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.589 [2024-05-15 01:09:41.854290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.589 [2024-05-15 01:09:41.854305] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.589 [2024-05-15 01:09:41.854317] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.589 [2024-05-15 01:09:41.854345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.589 qpair failed and we were unable to recover it. 00:22:29.589 [2024-05-15 01:09:41.864105] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.589 [2024-05-15 01:09:41.864270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.589 [2024-05-15 01:09:41.864295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.589 [2024-05-15 01:09:41.864309] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.589 [2024-05-15 01:09:41.864327] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.589 [2024-05-15 01:09:41.864356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.589 qpair failed and we were unable to recover it. 
00:22:29.589 [2024-05-15 01:09:41.874163] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.589 [2024-05-15 01:09:41.874357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.589 [2024-05-15 01:09:41.874382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.589 [2024-05-15 01:09:41.874397] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.589 [2024-05-15 01:09:41.874409] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.589 [2024-05-15 01:09:41.874436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.589 qpair failed and we were unable to recover it. 00:22:29.589 [2024-05-15 01:09:41.884165] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.589 [2024-05-15 01:09:41.884358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.589 [2024-05-15 01:09:41.884383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.589 [2024-05-15 01:09:41.884398] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.589 [2024-05-15 01:09:41.884409] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.589 [2024-05-15 01:09:41.884437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.589 qpair failed and we were unable to recover it. 00:22:29.589 [2024-05-15 01:09:41.894195] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.589 [2024-05-15 01:09:41.894356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.589 [2024-05-15 01:09:41.894381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.589 [2024-05-15 01:09:41.894395] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.589 [2024-05-15 01:09:41.894407] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.589 [2024-05-15 01:09:41.894435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.589 qpair failed and we were unable to recover it. 
00:22:29.589 [2024-05-15 01:09:41.904218] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.589 [2024-05-15 01:09:41.904412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.589 [2024-05-15 01:09:41.904437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.589 [2024-05-15 01:09:41.904452] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.589 [2024-05-15 01:09:41.904464] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.589 [2024-05-15 01:09:41.904491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.589 qpair failed and we were unable to recover it. 00:22:29.589 [2024-05-15 01:09:41.914219] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.589 [2024-05-15 01:09:41.914385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.589 [2024-05-15 01:09:41.914410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.589 [2024-05-15 01:09:41.914425] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.589 [2024-05-15 01:09:41.914437] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.589 [2024-05-15 01:09:41.914464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.589 qpair failed and we were unable to recover it. 00:22:29.589 [2024-05-15 01:09:41.924271] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.589 [2024-05-15 01:09:41.924435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.589 [2024-05-15 01:09:41.924461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.589 [2024-05-15 01:09:41.924476] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.589 [2024-05-15 01:09:41.924488] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.589 [2024-05-15 01:09:41.924516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.589 qpair failed and we were unable to recover it. 
00:22:29.589 [2024-05-15 01:09:41.934273] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.589 [2024-05-15 01:09:41.934428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.589 [2024-05-15 01:09:41.934453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.589 [2024-05-15 01:09:41.934468] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.589 [2024-05-15 01:09:41.934480] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.589 [2024-05-15 01:09:41.934508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.589 qpair failed and we were unable to recover it. 00:22:29.589 [2024-05-15 01:09:41.944317] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.589 [2024-05-15 01:09:41.944524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.589 [2024-05-15 01:09:41.944549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.589 [2024-05-15 01:09:41.944564] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.589 [2024-05-15 01:09:41.944576] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.589 [2024-05-15 01:09:41.944604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.589 qpair failed and we were unable to recover it. 00:22:29.589 [2024-05-15 01:09:41.954391] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.589 [2024-05-15 01:09:41.954584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.589 [2024-05-15 01:09:41.954609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.589 [2024-05-15 01:09:41.954623] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.589 [2024-05-15 01:09:41.954641] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.589 [2024-05-15 01:09:41.954670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.589 qpair failed and we were unable to recover it. 
00:22:29.589 [2024-05-15 01:09:41.964396] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.589 [2024-05-15 01:09:41.964559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.589 [2024-05-15 01:09:41.964584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.589 [2024-05-15 01:09:41.964598] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.589 [2024-05-15 01:09:41.964610] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.589 [2024-05-15 01:09:41.964638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.589 qpair failed and we were unable to recover it. 00:22:29.589 [2024-05-15 01:09:41.974424] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.589 [2024-05-15 01:09:41.974617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.589 [2024-05-15 01:09:41.974643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.589 [2024-05-15 01:09:41.974658] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.589 [2024-05-15 01:09:41.974674] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.589 [2024-05-15 01:09:41.974703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.589 qpair failed and we were unable to recover it. 00:22:29.847 [2024-05-15 01:09:41.984446] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.847 [2024-05-15 01:09:41.984647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.847 [2024-05-15 01:09:41.984674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.847 [2024-05-15 01:09:41.984689] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.847 [2024-05-15 01:09:41.984701] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.848 [2024-05-15 01:09:41.984730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.848 qpair failed and we were unable to recover it. 
00:22:29.848 [2024-05-15 01:09:41.994480] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.848 [2024-05-15 01:09:41.994656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.848 [2024-05-15 01:09:41.994681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.848 [2024-05-15 01:09:41.994696] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.848 [2024-05-15 01:09:41.994708] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.848 [2024-05-15 01:09:41.994736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.848 qpair failed and we were unable to recover it. 00:22:29.848 [2024-05-15 01:09:42.004494] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.848 [2024-05-15 01:09:42.004674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.848 [2024-05-15 01:09:42.004699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.848 [2024-05-15 01:09:42.004714] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.848 [2024-05-15 01:09:42.004727] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.848 [2024-05-15 01:09:42.004754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.848 qpair failed and we were unable to recover it. 00:22:29.848 [2024-05-15 01:09:42.014575] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.848 [2024-05-15 01:09:42.014786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.848 [2024-05-15 01:09:42.014812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.848 [2024-05-15 01:09:42.014827] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.848 [2024-05-15 01:09:42.014839] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.848 [2024-05-15 01:09:42.014866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.848 qpair failed and we were unable to recover it. 
00:22:29.848 [2024-05-15 01:09:42.024554] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.848 [2024-05-15 01:09:42.024728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.848 [2024-05-15 01:09:42.024753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.848 [2024-05-15 01:09:42.024768] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.848 [2024-05-15 01:09:42.024780] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.848 [2024-05-15 01:09:42.024808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.848 qpair failed and we were unable to recover it. 00:22:29.848 [2024-05-15 01:09:42.034577] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.848 [2024-05-15 01:09:42.034754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.848 [2024-05-15 01:09:42.034779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.848 [2024-05-15 01:09:42.034794] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.848 [2024-05-15 01:09:42.034806] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.848 [2024-05-15 01:09:42.034834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.848 qpair failed and we were unable to recover it. 00:22:29.848 [2024-05-15 01:09:42.044599] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.848 [2024-05-15 01:09:42.044779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.848 [2024-05-15 01:09:42.044804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.848 [2024-05-15 01:09:42.044824] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.848 [2024-05-15 01:09:42.044837] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.848 [2024-05-15 01:09:42.044865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.848 qpair failed and we were unable to recover it. 
00:22:29.848 [2024-05-15 01:09:42.054630] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.848 [2024-05-15 01:09:42.054831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.848 [2024-05-15 01:09:42.054856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.848 [2024-05-15 01:09:42.054871] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.848 [2024-05-15 01:09:42.054884] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.848 [2024-05-15 01:09:42.054911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.848 qpair failed and we were unable to recover it. 00:22:29.848 [2024-05-15 01:09:42.064686] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.848 [2024-05-15 01:09:42.064856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.848 [2024-05-15 01:09:42.064881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.848 [2024-05-15 01:09:42.064896] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.848 [2024-05-15 01:09:42.064909] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.848 [2024-05-15 01:09:42.064943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.848 qpair failed and we were unable to recover it. 00:22:29.848 [2024-05-15 01:09:42.074665] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.848 [2024-05-15 01:09:42.074836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.848 [2024-05-15 01:09:42.074860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.848 [2024-05-15 01:09:42.074875] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.848 [2024-05-15 01:09:42.074887] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.848 [2024-05-15 01:09:42.074914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.848 qpair failed and we were unable to recover it. 
00:22:29.848 [2024-05-15 01:09:42.084802] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.848 [2024-05-15 01:09:42.084976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.848 [2024-05-15 01:09:42.085001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.848 [2024-05-15 01:09:42.085016] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.848 [2024-05-15 01:09:42.085028] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.848 [2024-05-15 01:09:42.085056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.848 qpair failed and we were unable to recover it. 00:22:29.848 [2024-05-15 01:09:42.094738] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.848 [2024-05-15 01:09:42.094895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.848 [2024-05-15 01:09:42.094921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.848 [2024-05-15 01:09:42.094943] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.848 [2024-05-15 01:09:42.094956] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.848 [2024-05-15 01:09:42.094985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.848 qpair failed and we were unable to recover it. 00:22:29.848 [2024-05-15 01:09:42.104780] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.848 [2024-05-15 01:09:42.104948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.848 [2024-05-15 01:09:42.104974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.848 [2024-05-15 01:09:42.104989] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.848 [2024-05-15 01:09:42.105000] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.848 [2024-05-15 01:09:42.105029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.848 qpair failed and we were unable to recover it. 
00:22:29.848 [2024-05-15 01:09:42.114819] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.848 [2024-05-15 01:09:42.114982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.848 [2024-05-15 01:09:42.115008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.848 [2024-05-15 01:09:42.115022] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.848 [2024-05-15 01:09:42.115035] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.848 [2024-05-15 01:09:42.115062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.848 qpair failed and we were unable to recover it. 00:22:29.848 [2024-05-15 01:09:42.124859] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.848 [2024-05-15 01:09:42.125024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.848 [2024-05-15 01:09:42.125049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.848 [2024-05-15 01:09:42.125064] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.848 [2024-05-15 01:09:42.125076] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.848 [2024-05-15 01:09:42.125104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.848 qpair failed and we were unable to recover it. 00:22:29.848 [2024-05-15 01:09:42.134851] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.848 [2024-05-15 01:09:42.135037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.848 [2024-05-15 01:09:42.135062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.848 [2024-05-15 01:09:42.135085] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.848 [2024-05-15 01:09:42.135098] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.848 [2024-05-15 01:09:42.135126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.848 qpair failed and we were unable to recover it. 
00:22:29.848 [2024-05-15 01:09:42.144902] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.848 [2024-05-15 01:09:42.145110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.848 [2024-05-15 01:09:42.145135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.848 [2024-05-15 01:09:42.145150] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.848 [2024-05-15 01:09:42.145163] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.848 [2024-05-15 01:09:42.145190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.848 qpair failed and we were unable to recover it. 00:22:29.848 [2024-05-15 01:09:42.154899] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.848 [2024-05-15 01:09:42.155071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.848 [2024-05-15 01:09:42.155096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.848 [2024-05-15 01:09:42.155110] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.848 [2024-05-15 01:09:42.155122] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.848 [2024-05-15 01:09:42.155149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.848 qpair failed and we were unable to recover it. 00:22:29.849 [2024-05-15 01:09:42.164972] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.849 [2024-05-15 01:09:42.165135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.849 [2024-05-15 01:09:42.165160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.849 [2024-05-15 01:09:42.165175] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.849 [2024-05-15 01:09:42.165193] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.849 [2024-05-15 01:09:42.165220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.849 qpair failed and we were unable to recover it. 
00:22:29.849 [2024-05-15 01:09:42.174987] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.849 [2024-05-15 01:09:42.175172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.849 [2024-05-15 01:09:42.175199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.849 [2024-05-15 01:09:42.175214] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.849 [2024-05-15 01:09:42.175226] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.849 [2024-05-15 01:09:42.175254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.849 qpair failed and we were unable to recover it. 00:22:29.849 [2024-05-15 01:09:42.185016] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.849 [2024-05-15 01:09:42.185184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.849 [2024-05-15 01:09:42.185210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.849 [2024-05-15 01:09:42.185224] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.849 [2024-05-15 01:09:42.185236] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.849 [2024-05-15 01:09:42.185264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.849 qpair failed and we were unable to recover it. 00:22:29.849 [2024-05-15 01:09:42.195042] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.849 [2024-05-15 01:09:42.195236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.849 [2024-05-15 01:09:42.195263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.849 [2024-05-15 01:09:42.195278] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.849 [2024-05-15 01:09:42.195291] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.849 [2024-05-15 01:09:42.195319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.849 qpair failed and we were unable to recover it. 
00:22:29.849 [2024-05-15 01:09:42.205080] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.849 [2024-05-15 01:09:42.205246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.849 [2024-05-15 01:09:42.205271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.849 [2024-05-15 01:09:42.205286] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.849 [2024-05-15 01:09:42.205298] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.849 [2024-05-15 01:09:42.205327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.849 qpair failed and we were unable to recover it. 00:22:29.849 [2024-05-15 01:09:42.215091] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.849 [2024-05-15 01:09:42.215257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.849 [2024-05-15 01:09:42.215282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.849 [2024-05-15 01:09:42.215297] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.849 [2024-05-15 01:09:42.215309] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.849 [2024-05-15 01:09:42.215336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.849 qpair failed and we were unable to recover it. 00:22:29.849 [2024-05-15 01:09:42.225137] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.849 [2024-05-15 01:09:42.225309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.849 [2024-05-15 01:09:42.225334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.849 [2024-05-15 01:09:42.225355] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.849 [2024-05-15 01:09:42.225368] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.849 [2024-05-15 01:09:42.225396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.849 qpair failed and we were unable to recover it. 
00:22:29.849 [2024-05-15 01:09:42.235165] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:29.849 [2024-05-15 01:09:42.235343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:29.849 [2024-05-15 01:09:42.235368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:29.849 [2024-05-15 01:09:42.235382] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:29.849 [2024-05-15 01:09:42.235395] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:29.849 [2024-05-15 01:09:42.235422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:29.849 qpair failed and we were unable to recover it. 00:22:30.108 [2024-05-15 01:09:42.245179] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.108 [2024-05-15 01:09:42.245331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.108 [2024-05-15 01:09:42.245357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.108 [2024-05-15 01:09:42.245371] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.108 [2024-05-15 01:09:42.245383] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.108 [2024-05-15 01:09:42.245411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.108 qpair failed and we were unable to recover it. 00:22:30.108 [2024-05-15 01:09:42.255217] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.108 [2024-05-15 01:09:42.255380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.108 [2024-05-15 01:09:42.255405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.108 [2024-05-15 01:09:42.255421] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.108 [2024-05-15 01:09:42.255433] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.108 [2024-05-15 01:09:42.255461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.108 qpair failed and we were unable to recover it. 
00:22:30.108 [2024-05-15 01:09:42.265269] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.108 [2024-05-15 01:09:42.265435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.108 [2024-05-15 01:09:42.265461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.108 [2024-05-15 01:09:42.265477] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.108 [2024-05-15 01:09:42.265489] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.108 [2024-05-15 01:09:42.265517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.108 qpair failed and we were unable to recover it. 00:22:30.108 [2024-05-15 01:09:42.275234] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.108 [2024-05-15 01:09:42.275394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.108 [2024-05-15 01:09:42.275420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.108 [2024-05-15 01:09:42.275435] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.108 [2024-05-15 01:09:42.275448] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.108 [2024-05-15 01:09:42.275475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.108 qpair failed and we were unable to recover it. 00:22:30.108 [2024-05-15 01:09:42.285291] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.108 [2024-05-15 01:09:42.285483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.108 [2024-05-15 01:09:42.285509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.108 [2024-05-15 01:09:42.285524] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.108 [2024-05-15 01:09:42.285537] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.108 [2024-05-15 01:09:42.285565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.108 qpair failed and we were unable to recover it. 
00:22:30.108 [2024-05-15 01:09:42.295332] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.108 [2024-05-15 01:09:42.295492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.108 [2024-05-15 01:09:42.295518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.108 [2024-05-15 01:09:42.295533] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.108 [2024-05-15 01:09:42.295546] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.108 [2024-05-15 01:09:42.295574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.108 qpair failed and we were unable to recover it. 00:22:30.108 [2024-05-15 01:09:42.305341] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.109 [2024-05-15 01:09:42.305559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.109 [2024-05-15 01:09:42.305584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.109 [2024-05-15 01:09:42.305599] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.109 [2024-05-15 01:09:42.305611] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.109 [2024-05-15 01:09:42.305639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.109 qpair failed and we were unable to recover it. 00:22:30.109 [2024-05-15 01:09:42.315395] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.109 [2024-05-15 01:09:42.315582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.109 [2024-05-15 01:09:42.315612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.109 [2024-05-15 01:09:42.315628] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.109 [2024-05-15 01:09:42.315640] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.109 [2024-05-15 01:09:42.315667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.109 qpair failed and we were unable to recover it. 
00:22:30.109 [2024-05-15 01:09:42.325460] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.109 [2024-05-15 01:09:42.325619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.109 [2024-05-15 01:09:42.325645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.109 [2024-05-15 01:09:42.325660] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.109 [2024-05-15 01:09:42.325672] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.109 [2024-05-15 01:09:42.325700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.109 qpair failed and we were unable to recover it. 00:22:30.109 [2024-05-15 01:09:42.335456] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.109 [2024-05-15 01:09:42.335612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.109 [2024-05-15 01:09:42.335637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.109 [2024-05-15 01:09:42.335652] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.109 [2024-05-15 01:09:42.335665] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.109 [2024-05-15 01:09:42.335693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.109 qpair failed and we were unable to recover it. 00:22:30.109 [2024-05-15 01:09:42.345485] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.109 [2024-05-15 01:09:42.345653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.109 [2024-05-15 01:09:42.345678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.109 [2024-05-15 01:09:42.345693] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.109 [2024-05-15 01:09:42.345705] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.109 [2024-05-15 01:09:42.345733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.109 qpair failed and we were unable to recover it. 
00:22:30.109 [2024-05-15 01:09:42.355480] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.109 [2024-05-15 01:09:42.355643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.109 [2024-05-15 01:09:42.355668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.109 [2024-05-15 01:09:42.355683] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.109 [2024-05-15 01:09:42.355696] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.109 [2024-05-15 01:09:42.355729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.109 qpair failed and we were unable to recover it. 00:22:30.109 [2024-05-15 01:09:42.365517] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.109 [2024-05-15 01:09:42.365676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.109 [2024-05-15 01:09:42.365701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.109 [2024-05-15 01:09:42.365716] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.109 [2024-05-15 01:09:42.365728] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.109 [2024-05-15 01:09:42.365756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.109 qpair failed and we were unable to recover it. 00:22:30.109 [2024-05-15 01:09:42.375562] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.109 [2024-05-15 01:09:42.375725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.109 [2024-05-15 01:09:42.375751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.109 [2024-05-15 01:09:42.375766] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.109 [2024-05-15 01:09:42.375778] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.109 [2024-05-15 01:09:42.375806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.109 qpair failed and we were unable to recover it. 
00:22:30.109 [2024-05-15 01:09:42.385594] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.109 [2024-05-15 01:09:42.385759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.109 [2024-05-15 01:09:42.385784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.109 [2024-05-15 01:09:42.385798] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.109 [2024-05-15 01:09:42.385810] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.109 [2024-05-15 01:09:42.385837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.109 qpair failed and we were unable to recover it. 00:22:30.109 [2024-05-15 01:09:42.395609] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.109 [2024-05-15 01:09:42.395787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.109 [2024-05-15 01:09:42.395813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.109 [2024-05-15 01:09:42.395827] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.109 [2024-05-15 01:09:42.395839] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.109 [2024-05-15 01:09:42.395867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.109 qpair failed and we were unable to recover it. 00:22:30.109 [2024-05-15 01:09:42.405653] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.109 [2024-05-15 01:09:42.405816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.109 [2024-05-15 01:09:42.405847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.109 [2024-05-15 01:09:42.405862] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.109 [2024-05-15 01:09:42.405874] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.109 [2024-05-15 01:09:42.405903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.109 qpair failed and we were unable to recover it. 
00:22:30.109 [2024-05-15 01:09:42.415639] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.109 [2024-05-15 01:09:42.415809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.109 [2024-05-15 01:09:42.415834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.109 [2024-05-15 01:09:42.415849] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.109 [2024-05-15 01:09:42.415861] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.110 [2024-05-15 01:09:42.415888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.110 qpair failed and we were unable to recover it. 00:22:30.110 [2024-05-15 01:09:42.425686] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.110 [2024-05-15 01:09:42.425849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.110 [2024-05-15 01:09:42.425874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.110 [2024-05-15 01:09:42.425889] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.110 [2024-05-15 01:09:42.425901] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.110 [2024-05-15 01:09:42.425935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.110 qpair failed and we were unable to recover it. 00:22:30.110 [2024-05-15 01:09:42.435756] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.110 [2024-05-15 01:09:42.435966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.110 [2024-05-15 01:09:42.435992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.110 [2024-05-15 01:09:42.436006] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.110 [2024-05-15 01:09:42.436019] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.110 [2024-05-15 01:09:42.436046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.110 qpair failed and we were unable to recover it. 
00:22:30.110 [2024-05-15 01:09:42.445730] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.110 [2024-05-15 01:09:42.445968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.110 [2024-05-15 01:09:42.445993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.110 [2024-05-15 01:09:42.446008] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.110 [2024-05-15 01:09:42.446020] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.110 [2024-05-15 01:09:42.446053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.110 qpair failed and we were unable to recover it. 00:22:30.110 [2024-05-15 01:09:42.455755] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.110 [2024-05-15 01:09:42.455920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.110 [2024-05-15 01:09:42.455951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.110 [2024-05-15 01:09:42.455967] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.110 [2024-05-15 01:09:42.455979] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.110 [2024-05-15 01:09:42.456006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.110 qpair failed and we were unable to recover it. 00:22:30.110 [2024-05-15 01:09:42.465817] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.110 [2024-05-15 01:09:42.465995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.110 [2024-05-15 01:09:42.466020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.110 [2024-05-15 01:09:42.466035] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.110 [2024-05-15 01:09:42.466047] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.110 [2024-05-15 01:09:42.466075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.110 qpair failed and we were unable to recover it. 
00:22:30.110 [2024-05-15 01:09:42.475817] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.110 [2024-05-15 01:09:42.475987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.110 [2024-05-15 01:09:42.476011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.110 [2024-05-15 01:09:42.476026] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.110 [2024-05-15 01:09:42.476038] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.110 [2024-05-15 01:09:42.476066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.110 qpair failed and we were unable to recover it. 00:22:30.110 [2024-05-15 01:09:42.485869] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.110 [2024-05-15 01:09:42.486047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.110 [2024-05-15 01:09:42.486074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.110 [2024-05-15 01:09:42.486093] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.110 [2024-05-15 01:09:42.486107] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.110 [2024-05-15 01:09:42.486136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.110 qpair failed and we were unable to recover it. 00:22:30.110 [2024-05-15 01:09:42.495866] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.110 [2024-05-15 01:09:42.496046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.110 [2024-05-15 01:09:42.496078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.110 [2024-05-15 01:09:42.496102] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.110 [2024-05-15 01:09:42.496115] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.110 [2024-05-15 01:09:42.496144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.110 qpair failed and we were unable to recover it. 
00:22:30.370 [2024-05-15 01:09:42.505915] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.370 [2024-05-15 01:09:42.506096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.370 [2024-05-15 01:09:42.506122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.370 [2024-05-15 01:09:42.506137] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.370 [2024-05-15 01:09:42.506149] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.370 [2024-05-15 01:09:42.506178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.370 qpair failed and we were unable to recover it. 00:22:30.370 [2024-05-15 01:09:42.515944] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.370 [2024-05-15 01:09:42.516135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.370 [2024-05-15 01:09:42.516160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.370 [2024-05-15 01:09:42.516175] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.370 [2024-05-15 01:09:42.516187] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.370 [2024-05-15 01:09:42.516215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.370 qpair failed and we were unable to recover it. 00:22:30.370 [2024-05-15 01:09:42.526015] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.370 [2024-05-15 01:09:42.526216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.370 [2024-05-15 01:09:42.526241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.370 [2024-05-15 01:09:42.526256] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.370 [2024-05-15 01:09:42.526268] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.370 [2024-05-15 01:09:42.526295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.370 qpair failed and we were unable to recover it. 
00:22:30.370 [2024-05-15 01:09:42.536003] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.370 [2024-05-15 01:09:42.536160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.370 [2024-05-15 01:09:42.536185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.370 [2024-05-15 01:09:42.536200] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.370 [2024-05-15 01:09:42.536212] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.370 [2024-05-15 01:09:42.536245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.370 qpair failed and we were unable to recover it. 00:22:30.370 [2024-05-15 01:09:42.546028] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.370 [2024-05-15 01:09:42.546192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.370 [2024-05-15 01:09:42.546217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.370 [2024-05-15 01:09:42.546232] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.370 [2024-05-15 01:09:42.546244] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.370 [2024-05-15 01:09:42.546272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.370 qpair failed and we were unable to recover it. 00:22:30.370 [2024-05-15 01:09:42.556054] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.370 [2024-05-15 01:09:42.556253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.370 [2024-05-15 01:09:42.556278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.370 [2024-05-15 01:09:42.556294] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.370 [2024-05-15 01:09:42.556306] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.370 [2024-05-15 01:09:42.556333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.370 qpair failed and we were unable to recover it. 
00:22:30.370 [2024-05-15 01:09:42.566122] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.370 [2024-05-15 01:09:42.566319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.370 [2024-05-15 01:09:42.566346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.370 [2024-05-15 01:09:42.566361] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.370 [2024-05-15 01:09:42.566377] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.370 [2024-05-15 01:09:42.566407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.370 qpair failed and we were unable to recover it. 00:22:30.370 [2024-05-15 01:09:42.576134] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.370 [2024-05-15 01:09:42.576344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.370 [2024-05-15 01:09:42.576369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.370 [2024-05-15 01:09:42.576385] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.370 [2024-05-15 01:09:42.576397] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.370 [2024-05-15 01:09:42.576424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.370 qpair failed and we were unable to recover it. 00:22:30.370 [2024-05-15 01:09:42.586200] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.370 [2024-05-15 01:09:42.586411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.370 [2024-05-15 01:09:42.586441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.370 [2024-05-15 01:09:42.586457] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.370 [2024-05-15 01:09:42.586469] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.370 [2024-05-15 01:09:42.586497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.370 qpair failed and we were unable to recover it. 
00:22:30.370 [2024-05-15 01:09:42.596180] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.370 [2024-05-15 01:09:42.596336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.370 [2024-05-15 01:09:42.596361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.370 [2024-05-15 01:09:42.596376] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.370 [2024-05-15 01:09:42.596389] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.370 [2024-05-15 01:09:42.596416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.370 qpair failed and we were unable to recover it. 00:22:30.370 [2024-05-15 01:09:42.606178] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.370 [2024-05-15 01:09:42.606359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.370 [2024-05-15 01:09:42.606384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.370 [2024-05-15 01:09:42.606399] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.370 [2024-05-15 01:09:42.606411] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.370 [2024-05-15 01:09:42.606438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.370 qpair failed and we were unable to recover it. 00:22:30.370 [2024-05-15 01:09:42.616233] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.371 [2024-05-15 01:09:42.616393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.371 [2024-05-15 01:09:42.616418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.371 [2024-05-15 01:09:42.616432] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.371 [2024-05-15 01:09:42.616445] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.371 [2024-05-15 01:09:42.616473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.371 qpair failed and we were unable to recover it. 
00:22:30.371 [2024-05-15 01:09:42.626265] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.371 [2024-05-15 01:09:42.626472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.371 [2024-05-15 01:09:42.626497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.371 [2024-05-15 01:09:42.626511] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.371 [2024-05-15 01:09:42.626529] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.371 [2024-05-15 01:09:42.626557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.371 qpair failed and we were unable to recover it. 00:22:30.371 [2024-05-15 01:09:42.636275] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.371 [2024-05-15 01:09:42.636440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.371 [2024-05-15 01:09:42.636465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.371 [2024-05-15 01:09:42.636479] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.371 [2024-05-15 01:09:42.636491] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.371 [2024-05-15 01:09:42.636519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.371 qpair failed and we were unable to recover it. 00:22:30.371 [2024-05-15 01:09:42.646325] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.371 [2024-05-15 01:09:42.646516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.371 [2024-05-15 01:09:42.646541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.371 [2024-05-15 01:09:42.646556] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.371 [2024-05-15 01:09:42.646568] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.371 [2024-05-15 01:09:42.646595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.371 qpair failed and we were unable to recover it. 
00:22:30.371 [2024-05-15 01:09:42.656324] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.371 [2024-05-15 01:09:42.656484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.371 [2024-05-15 01:09:42.656509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.371 [2024-05-15 01:09:42.656523] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.371 [2024-05-15 01:09:42.656536] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.371 [2024-05-15 01:09:42.656563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.371 qpair failed and we were unable to recover it. 00:22:30.371 [2024-05-15 01:09:42.666351] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.371 [2024-05-15 01:09:42.666514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.371 [2024-05-15 01:09:42.666539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.371 [2024-05-15 01:09:42.666553] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.371 [2024-05-15 01:09:42.666565] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.371 [2024-05-15 01:09:42.666592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.371 qpair failed and we were unable to recover it. 00:22:30.371 [2024-05-15 01:09:42.676411] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.371 [2024-05-15 01:09:42.676592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.371 [2024-05-15 01:09:42.676617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.371 [2024-05-15 01:09:42.676631] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.371 [2024-05-15 01:09:42.676644] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.371 [2024-05-15 01:09:42.676672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.371 qpair failed and we were unable to recover it. 
00:22:30.371 [2024-05-15 01:09:42.686478] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.371 [2024-05-15 01:09:42.686637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.371 [2024-05-15 01:09:42.686662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.371 [2024-05-15 01:09:42.686678] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.371 [2024-05-15 01:09:42.686690] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.371 [2024-05-15 01:09:42.686718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.371 qpair failed and we were unable to recover it. 00:22:30.371 [2024-05-15 01:09:42.696428] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.371 [2024-05-15 01:09:42.696589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.371 [2024-05-15 01:09:42.696615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.371 [2024-05-15 01:09:42.696629] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.371 [2024-05-15 01:09:42.696641] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.371 [2024-05-15 01:09:42.696669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.371 qpair failed and we were unable to recover it. 00:22:30.371 [2024-05-15 01:09:42.706468] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.371 [2024-05-15 01:09:42.706634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.371 [2024-05-15 01:09:42.706659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.371 [2024-05-15 01:09:42.706674] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.371 [2024-05-15 01:09:42.706686] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.371 [2024-05-15 01:09:42.706714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.371 qpair failed and we were unable to recover it. 
00:22:30.371 [2024-05-15 01:09:42.716522] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.371 [2024-05-15 01:09:42.716684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.371 [2024-05-15 01:09:42.716709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.371 [2024-05-15 01:09:42.716723] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.371 [2024-05-15 01:09:42.716740] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.371 [2024-05-15 01:09:42.716769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.371 qpair failed and we were unable to recover it. 00:22:30.371 [2024-05-15 01:09:42.726552] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.371 [2024-05-15 01:09:42.726713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.371 [2024-05-15 01:09:42.726738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.371 [2024-05-15 01:09:42.726753] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.371 [2024-05-15 01:09:42.726765] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.371 [2024-05-15 01:09:42.726792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.371 qpair failed and we were unable to recover it. 00:22:30.371 [2024-05-15 01:09:42.736571] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.372 [2024-05-15 01:09:42.736747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.372 [2024-05-15 01:09:42.736774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.372 [2024-05-15 01:09:42.736792] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.372 [2024-05-15 01:09:42.736805] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.372 [2024-05-15 01:09:42.736834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.372 qpair failed and we were unable to recover it. 
00:22:30.372 [2024-05-15 01:09:42.746623] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.372 [2024-05-15 01:09:42.746794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.372 [2024-05-15 01:09:42.746819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.372 [2024-05-15 01:09:42.746834] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.372 [2024-05-15 01:09:42.746847] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.372 [2024-05-15 01:09:42.746875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.372 qpair failed and we were unable to recover it. 00:22:30.372 [2024-05-15 01:09:42.756631] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.372 [2024-05-15 01:09:42.756798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.372 [2024-05-15 01:09:42.756823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.372 [2024-05-15 01:09:42.756838] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.372 [2024-05-15 01:09:42.756850] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.372 [2024-05-15 01:09:42.756877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.372 qpair failed and we were unable to recover it. 00:22:30.631 [2024-05-15 01:09:42.766671] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.631 [2024-05-15 01:09:42.766896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.631 [2024-05-15 01:09:42.766922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.631 [2024-05-15 01:09:42.766944] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.631 [2024-05-15 01:09:42.766957] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e75420 00:22:30.631 [2024-05-15 01:09:42.766985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:30.631 qpair failed and we were unable to recover it. 
00:22:30.631 [2024-05-15 01:09:42.776721] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.631 [2024-05-15 01:09:42.776887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.631 [2024-05-15 01:09:42.776920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.631 [2024-05-15 01:09:42.776947] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.631 [2024-05-15 01:09:42.776962] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63f0000b90 00:22:30.631 [2024-05-15 01:09:42.776994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:30.631 qpair failed and we were unable to recover it. 00:22:30.631 [2024-05-15 01:09:42.786803] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:30.631 [2024-05-15 01:09:42.786982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:30.631 [2024-05-15 01:09:42.787010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:30.631 [2024-05-15 01:09:42.787025] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:30.631 [2024-05-15 01:09:42.787038] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63f0000b90 00:22:30.631 [2024-05-15 01:09:42.787068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:30.631 qpair failed and we were unable to recover it. 00:22:30.631 [2024-05-15 01:09:42.787167] nvme_ctrlr.c:4341:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:22:30.631 A controller has encountered a failure and is being reset. 00:22:30.631 qpair failed and we were unable to recover it. 00:22:30.631 [2024-05-15 01:09:42.787234] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e720b0 (9): Bad file descriptor 00:22:30.631 Controller properly reset. 00:22:30.631 Initializing NVMe Controllers 00:22:30.631 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:30.631 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:30.631 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:22:30.631 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:22:30.631 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:22:30.631 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:22:30.631 Initialization complete. Launching workers. 
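For reference only (not part of the captured output): a minimal host-side sketch of the poll-and-recover path that produces the "CQ transport error -6 (No such device or address)" and "Controller properly reset" messages above. It assumes the public SPDK NVMe driver API from spdk/nvme.h (spdk_nvme_qpair_process_completions appears in the log itself); the function name, logging, and error handling are illustrative, not the exact code run by this test.

/*
 * Hedged sketch: poll one I/O qpair and react to a transport-level
 * failure the way the target_disconnect test exercises it. Assumes
 * the public SPDK NVMe API; simplified for illustration.
 */
#include <stdio.h>

#include "spdk/nvme.h"

/* Poll the qpair; on a transport error (e.g. -6/-ENXIO as seen in the
 * log), first try to reconnect just this qpair and, failing that, fall
 * back to a full controller reset ("A controller has encountered a
 * failure and is being reset."). */
static int
poll_io_qpair(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
{
	int32_t rc;

	/* max_completions == 0 means "process everything available". */
	rc = spdk_nvme_qpair_process_completions(qpair, 0);
	if (rc >= 0) {
		return 0;	/* rc completions were reaped */
	}

	fprintf(stderr, "qpair completion error %d, attempting recovery\n", rc);

	/* Try to re-establish only this qpair first... */
	if (spdk_nvme_ctrlr_reconnect_io_qpair(qpair) == 0) {
		return 0;
	}

	/* ...otherwise reset the whole controller and let the caller
	 * re-allocate its I/O qpairs afterwards. */
	return spdk_nvme_ctrlr_reset(ctrlr);
}

In the log above, the reconnect attempts keep failing while the target rejects CONNECT ("Unknown controller ID 0x1"), so the host eventually takes the controller-reset branch and re-attaches, which is the "Initializing NVMe Controllers ... Initialization complete." sequence that follows.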
00:22:30.631 Starting thread on core 1 00:22:30.631 Starting thread on core 2 00:22:30.631 Starting thread on core 3 00:22:30.631 Starting thread on core 0 00:22:30.631 01:09:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@59 -- # sync 00:22:30.631 00:22:30.631 real 0m11.532s 00:22:30.631 user 0m20.427s 00:22:30.631 sys 0m5.229s 00:22:30.631 01:09:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:30.631 01:09:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:30.631 ************************************ 00:22:30.631 END TEST nvmf_target_disconnect_tc2 00:22:30.631 ************************************ 00:22:30.631 01:09:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:22:30.631 01:09:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:30.631 01:09:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@85 -- # nvmftestfini 00:22:30.631 01:09:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:30.631 01:09:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:22:30.631 01:09:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:30.631 01:09:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:22:30.631 01:09:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:30.631 01:09:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:30.631 rmmod nvme_tcp 00:22:30.631 rmmod nvme_fabrics 00:22:30.631 rmmod nvme_keyring 00:22:30.631 01:09:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:30.631 01:09:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:22:30.631 01:09:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:22:30.631 01:09:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1348224 ']' 00:22:30.631 01:09:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1348224 00:22:30.631 01:09:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 1348224 ']' 00:22:30.631 01:09:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 1348224 00:22:30.631 01:09:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:22:30.631 01:09:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:30.631 01:09:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1348224 00:22:30.631 01:09:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4 00:22:30.631 01:09:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:22:30.631 01:09:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1348224' 00:22:30.631 killing process with pid 1348224 00:22:30.631 01:09:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 1348224 00:22:30.631 [2024-05-15 01:09:42.925188] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 
1 times 00:22:30.631 01:09:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 1348224 00:22:30.891 01:09:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:30.891 01:09:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:30.891 01:09:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:30.891 01:09:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:30.891 01:09:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:30.891 01:09:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.891 01:09:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:30.891 01:09:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.425 01:09:45 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:33.425 00:22:33.425 real 0m16.934s 00:22:33.425 user 0m46.969s 00:22:33.425 sys 0m7.610s 00:22:33.425 01:09:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:33.425 01:09:45 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:22:33.425 ************************************ 00:22:33.425 END TEST nvmf_target_disconnect 00:22:33.425 ************************************ 00:22:33.425 01:09:45 nvmf_tcp -- nvmf/nvmf.sh@124 -- # timing_exit host 00:22:33.425 01:09:45 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:33.425 01:09:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:33.425 01:09:45 nvmf_tcp -- nvmf/nvmf.sh@126 -- # trap - SIGINT SIGTERM EXIT 00:22:33.425 00:22:33.425 real 16m48.144s 00:22:33.425 user 39m14.888s 00:22:33.425 sys 4m51.395s 00:22:33.425 01:09:45 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:33.425 01:09:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:33.425 ************************************ 00:22:33.425 END TEST nvmf_tcp 00:22:33.425 ************************************ 00:22:33.425 01:09:45 -- spdk/autotest.sh@284 -- # [[ 0 -eq 0 ]] 00:22:33.425 01:09:45 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:22:33.425 01:09:45 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:33.425 01:09:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:33.425 01:09:45 -- common/autotest_common.sh@10 -- # set +x 00:22:33.425 ************************************ 00:22:33.425 START TEST spdkcli_nvmf_tcp 00:22:33.425 ************************************ 00:22:33.425 01:09:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:22:33.425 * Looking for test storage... 
00:22:33.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:22:33.425 01:09:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:22:33.425 01:09:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:22:33.425 01:09:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:22:33.425 01:09:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:33.425 01:09:45 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:22:33.425 01:09:45 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:33.425 01:09:45 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:33.425 01:09:45 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:33.425 01:09:45 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:33.425 01:09:45 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:33.425 01:09:45 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:33.425 01:09:45 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:33.425 01:09:45 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:33.425 01:09:45 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:33.425 01:09:45 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:33.425 01:09:45 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:33.425 01:09:45 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:33.425 01:09:45 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:33.425 01:09:45 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:33.425 01:09:45 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:33.425 01:09:45 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:33.425 01:09:45 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:33.425 01:09:45 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:33.425 01:09:45 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:33.425 01:09:45 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:33.425 01:09:45 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.425 01:09:45 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.426 01:09:45 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.426 01:09:45 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:22:33.426 01:09:45 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.426 01:09:45 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:22:33.426 01:09:45 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:33.426 01:09:45 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:33.426 01:09:45 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:33.426 01:09:45 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:33.426 01:09:45 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:33.426 01:09:45 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:33.426 01:09:45 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:33.426 01:09:45 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:33.426 01:09:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:22:33.426 01:09:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:22:33.426 01:09:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:22:33.426 01:09:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:22:33.426 01:09:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:33.426 01:09:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:33.426 01:09:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:22:33.426 01:09:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1349415 00:22:33.426 01:09:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:22:33.426 01:09:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1349415 00:22:33.426 01:09:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 1349415 ']' 00:22:33.426 01:09:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.426 01:09:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:33.426 01:09:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.426 01:09:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:33.426 01:09:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:33.426 [2024-05-15 01:09:45.476082] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:22:33.426 [2024-05-15 01:09:45.476160] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1349415 ] 00:22:33.426 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.426 [2024-05-15 01:09:45.544824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:33.426 [2024-05-15 01:09:45.658967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.426 [2024-05-15 01:09:45.658971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.357 01:09:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:34.357 01:09:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:22:34.357 01:09:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:22:34.357 01:09:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:34.357 01:09:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:34.357 01:09:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:22:34.357 01:09:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:22:34.357 01:09:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:22:34.357 01:09:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:34.357 01:09:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:34.357 01:09:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:22:34.357 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:22:34.357 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:22:34.357 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:22:34.357 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:22:34.357 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:22:34.357 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:22:34.357 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:22:34.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:22:34.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:22:34.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:22:34.357 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:34.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:22:34.357 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:22:34.357 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:34.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:22:34.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:22:34.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:22:34.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:22:34.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:34.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:22:34.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:22:34.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:22:34.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:22:34.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:34.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:22:34.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:22:34.357 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:22:34.357 ' 00:22:36.883 [2024-05-15 01:09:49.056339] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.257 [2024-05-15 01:09:50.284133] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:38.257 [2024-05-15 01:09:50.284824] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:22:40.783 [2024-05-15 01:09:52.579870] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:22:42.686 [2024-05-15 01:09:54.570357] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:22:44.063 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:22:44.063 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:22:44.063 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:22:44.063 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:22:44.063 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:22:44.063 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:22:44.063 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:22:44.063 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:22:44.063 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:22:44.063 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:22:44.063 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:22:44.063 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:44.063 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:22:44.063 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:22:44.063 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:44.063 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:22:44.063 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:22:44.063 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:22:44.063 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:22:44.063 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:44.063 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:22:44.063 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:22:44.063 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:22:44.063 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:22:44.063 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:44.063 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:22:44.063 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:22:44.063 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:22:44.063 01:09:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:22:44.063 01:09:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:44.063 01:09:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:44.063 01:09:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:22:44.063 01:09:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:44.063 01:09:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:44.063 01:09:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:22:44.063 01:09:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll 
/nvmf 00:22:44.322 01:09:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:22:44.322 01:09:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:22:44.322 01:09:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:22:44.322 01:09:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:44.322 01:09:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:44.322 01:09:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:22:44.322 01:09:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:44.322 01:09:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:44.322 01:09:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:22:44.322 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:22:44.322 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:22:44.322 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:22:44.322 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:22:44.322 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:22:44.322 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:22:44.322 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:22:44.322 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:22:44.322 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:22:44.322 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:22:44.322 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:22:44.322 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:22:44.322 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:22:44.322 ' 00:22:49.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:22:49.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:22:49.635 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:22:49.635 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:22:49.635 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:22:49.635 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:22:49.635 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:22:49.635 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:22:49.635 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:22:49.635 
Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:22:49.635 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:22:49.635 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:22:49.635 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:22:49.635 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:22:49.635 01:10:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:22:49.635 01:10:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:49.635 01:10:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:49.635 01:10:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1349415 00:22:49.635 01:10:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 1349415 ']' 00:22:49.635 01:10:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 1349415 00:22:49.635 01:10:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:22:49.635 01:10:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:49.635 01:10:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1349415 00:22:49.635 01:10:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:49.635 01:10:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:49.635 01:10:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1349415' 00:22:49.635 killing process with pid 1349415 00:22:49.635 01:10:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 1349415 00:22:49.635 [2024-05-15 01:10:01.953375] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:49.635 01:10:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 1349415 00:22:49.894 01:10:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:22:49.894 01:10:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:22:49.894 01:10:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1349415 ']' 00:22:49.894 01:10:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1349415 00:22:49.894 01:10:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 1349415 ']' 00:22:49.894 01:10:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 1349415 00:22:49.894 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1349415) - No such process 00:22:49.894 01:10:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 1349415 is not found' 00:22:49.894 Process with pid 1349415 is not found 00:22:49.894 01:10:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:22:49.894 01:10:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:22:49.894 01:10:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:22:49.894 00:22:49.894 real 0m16.869s 00:22:49.894 user 0m35.764s 00:22:49.894 sys 0m0.871s 00:22:49.894 01:10:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:49.894 01:10:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:22:49.894 ************************************ 00:22:49.894 END TEST spdkcli_nvmf_tcp 00:22:49.894 ************************************ 00:22:49.894 01:10:02 -- spdk/autotest.sh@286 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:22:49.894 01:10:02 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:49.894 01:10:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:49.894 01:10:02 -- common/autotest_common.sh@10 -- # set +x 00:22:49.894 ************************************ 00:22:49.894 START TEST nvmf_identify_passthru 00:22:49.894 ************************************ 00:22:49.894 01:10:02 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:22:50.152 * Looking for test storage... 00:22:50.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:50.152 01:10:02 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:50.152 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:22:50.152 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:50.152 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:50.152 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:50.152 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:50.152 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:50.152 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:50.152 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:50.152 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:50.152 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:50.152 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:50.152 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:50.152 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:50.153 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:50.153 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:50.153 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:50.153 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:50.153 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:50.153 01:10:02 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:50.153 01:10:02 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:50.153 01:10:02 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:50.153 01:10:02 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.153 01:10:02 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.153 01:10:02 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.153 01:10:02 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:22:50.153 01:10:02 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.153 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:22:50.153 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:50.153 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:50.153 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:50.153 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:50.153 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:50.153 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:50.153 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:50.153 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:50.153 01:10:02 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:50.153 01:10:02 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:50.153 01:10:02 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:50.153 01:10:02 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:50.153 01:10:02 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.153 01:10:02 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.153 01:10:02 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.153 01:10:02 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:22:50.153 01:10:02 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.153 01:10:02 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:22:50.153 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:50.153 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:50.153 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:50.153 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:50.153 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:50.153 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.153 01:10:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:50.153 01:10:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.153 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:50.153 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:50.153 01:10:02 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:22:50.153 01:10:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:52.712 01:10:04 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:52.712 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:52.712 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:52.712 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:52.712 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:52.713 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
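The block above is common.sh's gather_supported_nvmf_pci_devs pass for this test: the two Intel E810 ports are matched by PCI vendor/device ID (0x8086:0x159b, bound to the ice driver), each port's kernel net device is looked up through /sys/bus/pci/devices/<bdf>/net/, and the result (cvl_0_0 and cvl_0_1) sets is_hw=yes so the physical-NIC TCP path is used. A minimal stand-alone sketch of that lookup, assuming only the standard sysfs layout and not the exact common.sh code, is:

  # Enumerate E810 (8086:159b) ports and the net devices the kernel created for them.
  for pci in /sys/bus/pci/devices/*; do
      [[ -r $pci/vendor && -r $pci/device ]] || continue
      [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
      for netdev in "$pci"/net/*; do
          [[ -e $netdev ]] || continue
          echo "Found net device under ${pci##*/}: ${netdev##*/}"
      done
  done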
00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:52.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:52.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:22:52.713 00:22:52.713 --- 10.0.0.2 ping statistics --- 00:22:52.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.713 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:52.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:52.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:22:52.713 00:22:52.713 --- 10.0.0.1 ping statistics --- 00:22:52.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.713 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:52.713 01:10:04 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:52.713 01:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:22:52.713 01:10:04 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:52.713 01:10:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:52.713 01:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:22:52.713 01:10:04 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:22:52.713 01:10:04 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:22:52.713 01:10:04 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:22:52.713 01:10:04 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:22:52.713 01:10:04 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:22:52.713 01:10:04 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:22:52.713 01:10:04 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:22:52.713 01:10:04 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:52.713 01:10:04 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:22:52.713 01:10:04 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:22:52.713 01:10:04 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:22:52.713 01:10:04 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:88:00.0 00:22:52.713 01:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:22:52.713 01:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:22:52.713 01:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:22:52.713 01:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:22:52.713 01:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:22:52.713 EAL: No free 2048 kB hugepages reported on node 1 00:22:56.911 
01:10:09 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:22:56.911 01:10:09 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:22:56.911 01:10:09 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:22:56.911 01:10:09 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:22:56.911 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.106 01:10:13 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:23:01.106 01:10:13 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:23:01.106 01:10:13 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:01.106 01:10:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:01.106 01:10:13 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:23:01.106 01:10:13 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:01.106 01:10:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:01.106 01:10:13 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1354471 00:23:01.106 01:10:13 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:01.106 01:10:13 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:01.106 01:10:13 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1354471 00:23:01.106 01:10:13 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 1354471 ']' 00:23:01.106 01:10:13 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.106 01:10:13 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:01.106 01:10:13 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:01.106 01:10:13 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:01.106 01:10:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:01.106 [2024-05-15 01:10:13.453803] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:23:01.106 [2024-05-15 01:10:13.453883] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:01.106 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.365 [2024-05-15 01:10:13.530277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:01.365 [2024-05-15 01:10:13.646447] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:01.365 [2024-05-15 01:10:13.646532] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:01.365 [2024-05-15 01:10:13.646546] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:01.365 [2024-05-15 01:10:13.646557] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:01.365 [2024-05-15 01:10:13.646566] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:01.365 [2024-05-15 01:10:13.646654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.365 [2024-05-15 01:10:13.646720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.365 [2024-05-15 01:10:13.646783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:01.365 [2024-05-15 01:10:13.646785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.303 01:10:14 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:02.303 01:10:14 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:23:02.303 01:10:14 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:23:02.303 01:10:14 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.303 01:10:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:02.303 INFO: Log level set to 20 00:23:02.303 INFO: Requests: 00:23:02.303 { 00:23:02.303 "jsonrpc": "2.0", 00:23:02.303 "method": "nvmf_set_config", 00:23:02.303 "id": 1, 00:23:02.303 "params": { 00:23:02.303 "admin_cmd_passthru": { 00:23:02.303 "identify_ctrlr": true 00:23:02.303 } 00:23:02.303 } 00:23:02.303 } 00:23:02.303 00:23:02.303 INFO: response: 00:23:02.303 { 00:23:02.303 "jsonrpc": "2.0", 00:23:02.303 "id": 1, 00:23:02.303 "result": true 00:23:02.303 } 00:23:02.303 00:23:02.303 01:10:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.303 01:10:14 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:23:02.303 01:10:14 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.303 01:10:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:02.303 INFO: Setting log level to 20 00:23:02.303 INFO: Setting log level to 20 00:23:02.303 INFO: Log level set to 20 00:23:02.303 INFO: Log level set to 20 00:23:02.303 INFO: Requests: 00:23:02.303 { 00:23:02.303 "jsonrpc": "2.0", 00:23:02.303 "method": "framework_start_init", 00:23:02.303 "id": 1 00:23:02.303 } 00:23:02.303 00:23:02.303 INFO: Requests: 00:23:02.303 { 00:23:02.303 "jsonrpc": "2.0", 00:23:02.303 "method": "framework_start_init", 00:23:02.303 "id": 1 00:23:02.303 } 00:23:02.303 00:23:02.303 [2024-05-15 01:10:14.562323] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:23:02.303 INFO: response: 00:23:02.303 { 00:23:02.303 "jsonrpc": "2.0", 00:23:02.303 "id": 1, 00:23:02.303 "result": true 00:23:02.303 } 00:23:02.303 00:23:02.303 INFO: response: 00:23:02.303 { 00:23:02.303 "jsonrpc": "2.0", 00:23:02.303 "id": 1, 00:23:02.303 "result": true 00:23:02.303 } 00:23:02.303 00:23:02.303 01:10:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.303 01:10:14 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:02.303 01:10:14 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.303 01:10:14 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:23:02.303 INFO: Setting log level to 40 00:23:02.303 INFO: Setting log level to 40 00:23:02.303 INFO: Setting log level to 40 00:23:02.303 [2024-05-15 01:10:14.572493] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:02.303 01:10:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.303 01:10:14 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:23:02.303 01:10:14 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:02.303 01:10:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:02.303 01:10:14 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:23:02.303 01:10:14 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.303 01:10:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:05.595 Nvme0n1 00:23:05.595 01:10:17 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.595 01:10:17 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:23:05.595 01:10:17 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.595 01:10:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:05.595 01:10:17 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.595 01:10:17 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:05.595 01:10:17 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.595 01:10:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:05.595 01:10:17 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.595 01:10:17 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:05.595 01:10:17 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.595 01:10:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:05.595 [2024-05-15 01:10:17.470844] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:05.595 [2024-05-15 01:10:17.471156] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:05.595 01:10:17 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.595 01:10:17 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:23:05.595 01:10:17 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.595 01:10:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:05.595 [ 00:23:05.595 { 00:23:05.595 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:05.595 "subtype": "Discovery", 00:23:05.595 "listen_addresses": [], 00:23:05.595 "allow_any_host": true, 00:23:05.595 "hosts": [] 00:23:05.595 }, 00:23:05.595 { 00:23:05.595 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:05.595 "subtype": "NVMe", 00:23:05.595 "listen_addresses": [ 00:23:05.595 { 00:23:05.595 "trtype": "TCP", 
00:23:05.595 "adrfam": "IPv4", 00:23:05.595 "traddr": "10.0.0.2", 00:23:05.595 "trsvcid": "4420" 00:23:05.595 } 00:23:05.595 ], 00:23:05.595 "allow_any_host": true, 00:23:05.595 "hosts": [], 00:23:05.595 "serial_number": "SPDK00000000000001", 00:23:05.595 "model_number": "SPDK bdev Controller", 00:23:05.595 "max_namespaces": 1, 00:23:05.595 "min_cntlid": 1, 00:23:05.595 "max_cntlid": 65519, 00:23:05.595 "namespaces": [ 00:23:05.595 { 00:23:05.595 "nsid": 1, 00:23:05.595 "bdev_name": "Nvme0n1", 00:23:05.595 "name": "Nvme0n1", 00:23:05.595 "nguid": "77C8AD7F64F24FB9B8A52CC30BE41593", 00:23:05.595 "uuid": "77c8ad7f-64f2-4fb9-b8a5-2cc30be41593" 00:23:05.595 } 00:23:05.595 ] 00:23:05.595 } 00:23:05.595 ] 00:23:05.595 01:10:17 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.595 01:10:17 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:05.595 01:10:17 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:23:05.595 01:10:17 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:23:05.595 EAL: No free 2048 kB hugepages reported on node 1 00:23:05.595 01:10:17 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:23:05.595 01:10:17 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:05.595 01:10:17 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:23:05.595 01:10:17 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:23:05.595 EAL: No free 2048 kB hugepages reported on node 1 00:23:05.595 01:10:17 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:23:05.595 01:10:17 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:23:05.595 01:10:17 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:23:05.595 01:10:17 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:05.595 01:10:17 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.595 01:10:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:05.595 01:10:17 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.595 01:10:17 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:23:05.595 01:10:17 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:23:05.595 01:10:17 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:05.856 01:10:17 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:23:05.856 01:10:17 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:05.856 01:10:17 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:23:05.856 01:10:17 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:05.856 01:10:17 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:05.856 rmmod nvme_tcp 00:23:05.856 rmmod nvme_fabrics 00:23:05.856 rmmod 
nvme_keyring 00:23:05.856 01:10:18 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:05.856 01:10:18 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:23:05.856 01:10:18 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:23:05.856 01:10:18 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1354471 ']' 00:23:05.856 01:10:18 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1354471 00:23:05.856 01:10:18 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 1354471 ']' 00:23:05.856 01:10:18 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 1354471 00:23:05.856 01:10:18 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:23:05.856 01:10:18 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:05.856 01:10:18 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1354471 00:23:05.856 01:10:18 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:05.856 01:10:18 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:05.856 01:10:18 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1354471' 00:23:05.856 killing process with pid 1354471 00:23:05.856 01:10:18 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 1354471 00:23:05.856 [2024-05-15 01:10:18.060386] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:05.856 01:10:18 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 1354471 00:23:07.764 01:10:19 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:07.764 01:10:19 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:07.764 01:10:19 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:07.764 01:10:19 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:07.764 01:10:19 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:07.764 01:10:19 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.764 01:10:19 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:07.764 01:10:19 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.668 01:10:21 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:09.668 00:23:09.668 real 0m19.447s 00:23:09.668 user 0m30.721s 00:23:09.668 sys 0m2.771s 00:23:09.668 01:10:21 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:09.668 01:10:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:09.668 ************************************ 00:23:09.668 END TEST nvmf_identify_passthru 00:23:09.668 ************************************ 00:23:09.668 01:10:21 -- spdk/autotest.sh@288 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:23:09.668 01:10:21 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:09.668 01:10:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:09.668 01:10:21 -- common/autotest_common.sh@10 -- # set +x 00:23:09.668 ************************************ 00:23:09.668 START TEST nvmf_dif 
00:23:09.668 ************************************ 00:23:09.668 01:10:21 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:23:09.668 * Looking for test storage... 00:23:09.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:09.669 01:10:21 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:09.669 01:10:21 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:09.669 01:10:21 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:09.669 01:10:21 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:09.669 01:10:21 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.669 01:10:21 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.669 01:10:21 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.669 01:10:21 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:23:09.669 01:10:21 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:09.669 01:10:21 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:23:09.669 01:10:21 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:23:09.669 01:10:21 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:23:09.669 01:10:21 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:23:09.669 01:10:21 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.669 01:10:21 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:09.669 01:10:21 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:09.669 01:10:21 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:23:09.669 01:10:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:12.207 01:10:24 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 
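Before nvmftestinit runs, dif.sh@15 fixes the shape of the null bdev every sub-test will export: NULL_SIZE=64 and NULL_BLOCK_SIZE=512 (a small null bdev of 512-byte blocks), NULL_META=16 (16 bytes of per-block metadata) and NULL_DIF=1 (DIF type 1); dif.sh@136 later appends --dif-insert-or-strip to the TCP transport options so the target inserts and strips that protection information on the wire. As a hedged summary, the create_transport and create_subsystems 0 steps that fio_dif_1_default performs further down in this log (via rpc_cmd) amount to:

  # Sketch of the transport and subsystem setup traced later in this log:
  rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip
  rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420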
00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:12.208 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:12.208 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:12.208 01:10:24 nvmf_dif -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:12.208 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:12.208 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:12.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:12.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:23:12.208 00:23:12.208 --- 10.0.0.2 ping statistics --- 00:23:12.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.208 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:23:12.208 01:10:24 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:12.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:12.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:23:12.209 00:23:12.209 --- 10.0.0.1 ping statistics --- 00:23:12.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.209 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:23:12.209 01:10:24 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:12.209 01:10:24 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:23:12.209 01:10:24 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:23:12.209 01:10:24 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:13.588 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:13.588 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:23:13.588 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:13.588 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:23:13.588 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:13.588 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:13.588 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:13.588 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:13.588 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:13.588 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:13.588 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:13.588 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:23:13.588 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:13.588 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:13.588 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:13.588 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:13.588 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:13.588 01:10:25 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:13.588 01:10:25 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:13.588 01:10:25 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:13.588 01:10:25 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:13.588 01:10:25 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:13.588 01:10:25 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:13.588 01:10:25 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:23:13.588 01:10:25 nvmf_dif -- 
target/dif.sh@137 -- # nvmfappstart 00:23:13.588 01:10:25 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:13.588 01:10:25 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:13.588 01:10:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:13.588 01:10:25 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1358251 00:23:13.588 01:10:25 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:13.588 01:10:25 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1358251 00:23:13.588 01:10:25 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 1358251 ']' 00:23:13.588 01:10:25 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.588 01:10:25 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:13.588 01:10:25 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.588 01:10:25 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:13.588 01:10:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:13.588 [2024-05-15 01:10:25.899719] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:23:13.588 [2024-05-15 01:10:25.899805] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.588 EAL: No free 2048 kB hugepages reported on node 1 00:23:13.846 [2024-05-15 01:10:25.983112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.846 [2024-05-15 01:10:26.098126] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:13.846 [2024-05-15 01:10:26.098187] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:13.846 [2024-05-15 01:10:26.098204] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:13.846 [2024-05-15 01:10:26.098217] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:13.846 [2024-05-15 01:10:26.098228] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
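Unlike the identify_passthru target earlier in this log, which was launched with --wait-for-rpc so that nvmf_set_config --passthru-identify-ctrlr could be applied before framework_start_init, this nvmf_tgt (pid 1358251) needs no pre-init configuration, so the test only waits on /var/tmp/spdk.sock and starts issuing rpc_cmd calls directly. rpc_cmd is essentially a wrapper around scripts/rpc.py against that socket, so a rough manual equivalent of the earlier passthru flow, run from the SPDK repo root, would be:

  # Hedged sketch of the --wait-for-rpc sequence used by identify_passthru above:
  ./build/bin/nvmf_tgt --wait-for-rpc &
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # crude stand-in for waitforlisten
  ./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # must land before framework init
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192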
00:23:13.846 [2024-05-15 01:10:26.098267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.782 01:10:26 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:14.782 01:10:26 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:23:14.782 01:10:26 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:14.783 01:10:26 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:14.783 01:10:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:14.783 01:10:26 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.783 01:10:26 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:23:14.783 01:10:26 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:23:14.783 01:10:26 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.783 01:10:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:14.783 [2024-05-15 01:10:26.913528] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.783 01:10:26 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.783 01:10:26 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:23:14.783 01:10:26 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:14.783 01:10:26 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:14.783 01:10:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:14.783 ************************************ 00:23:14.783 START TEST fio_dif_1_default 00:23:14.783 ************************************ 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:14.783 bdev_null0 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:14.783 [2024-05-15 01:10:26.977613] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:14.783 [2024-05-15 01:10:26.977864] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:14.783 { 00:23:14.783 "params": { 00:23:14.783 "name": "Nvme$subsystem", 00:23:14.783 "trtype": "$TEST_TRANSPORT", 00:23:14.783 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:14.783 "adrfam": "ipv4", 00:23:14.783 "trsvcid": "$NVMF_PORT", 00:23:14.783 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:14.783 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:14.783 "hdgst": ${hdgst:-false}, 00:23:14.783 "ddgst": ${ddgst:-false} 00:23:14.783 }, 00:23:14.783 "method": "bdev_nvme_attach_controller" 00:23:14.783 } 00:23:14.783 EOF 00:23:14.783 )") 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in 
"${sanitizers[@]}" 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:23:14.783 01:10:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:14.783 "params": { 00:23:14.783 "name": "Nvme0", 00:23:14.783 "trtype": "tcp", 00:23:14.783 "traddr": "10.0.0.2", 00:23:14.783 "adrfam": "ipv4", 00:23:14.783 "trsvcid": "4420", 00:23:14.783 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:14.783 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:14.783 "hdgst": false, 00:23:14.783 "ddgst": false 00:23:14.783 }, 00:23:14.783 "method": "bdev_nvme_attach_controller" 00:23:14.783 }' 00:23:14.783 01:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:14.783 01:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:14.783 01:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:14.783 01:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:14.783 01:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:23:14.783 01:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:14.783 01:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:14.783 01:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:14.783 01:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:23:14.783 01:10:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:15.043 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:15.043 fio-3.35 00:23:15.043 Starting 1 thread 00:23:15.043 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.259 00:23:27.259 filename0: (groupid=0, jobs=1): err= 0: pid=1358497: Wed May 15 01:10:37 2024 00:23:27.259 read: IOPS=185, BW=741KiB/s (759kB/s)(7424KiB/10020msec) 00:23:27.259 slat (nsec): min=3584, max=36802, avg=11322.31, stdev=5407.79 00:23:27.259 clat (usec): min=942, max=44911, avg=21559.54, stdev=20407.34 00:23:27.259 lat (usec): min=950, max=44925, avg=21570.86, stdev=20407.69 00:23:27.259 clat percentiles (usec): 00:23:27.259 | 1.00th=[ 971], 5.00th=[ 1004], 10.00th=[ 1029], 20.00th=[ 1057], 00:23:27.259 | 30.00th=[ 1074], 40.00th=[ 1123], 50.00th=[41681], 60.00th=[41681], 00:23:27.259 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:23:27.259 | 99.00th=[42206], 99.50th=[42206], 
99.90th=[44827], 99.95th=[44827], 00:23:27.259 | 99.99th=[44827] 00:23:27.259 bw ( KiB/s): min= 672, max= 768, per=99.88%, avg=740.80, stdev=34.86, samples=20 00:23:27.259 iops : min= 168, max= 192, avg=185.20, stdev= 8.72, samples=20 00:23:27.259 lat (usec) : 1000=4.53% 00:23:27.259 lat (msec) : 2=45.26%, 50=50.22% 00:23:27.259 cpu : usr=89.61%, sys=10.13%, ctx=18, majf=0, minf=230 00:23:27.259 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:27.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:27.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:27.259 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:27.259 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:27.259 00:23:27.259 Run status group 0 (all jobs): 00:23:27.259 READ: bw=741KiB/s (759kB/s), 741KiB/s-741KiB/s (759kB/s-759kB/s), io=7424KiB (7602kB), run=10020-10020msec 00:23:27.259 01:10:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:23:27.259 01:10:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:23:27.259 01:10:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:23:27.259 01:10:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:27.259 01:10:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:23:27.259 01:10:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:27.259 01:10:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.259 01:10:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:27.259 01:10:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.259 01:10:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:27.259 01:10:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.259 01:10:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:27.259 01:10:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.259 00:23:27.259 real 0m11.069s 00:23:27.259 user 0m10.070s 00:23:27.259 sys 0m1.291s 00:23:27.259 01:10:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:27.259 01:10:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:27.259 ************************************ 00:23:27.259 END TEST fio_dif_1_default 00:23:27.259 ************************************ 00:23:27.259 01:10:38 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:23:27.259 01:10:38 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:27.259 01:10:38 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:27.259 01:10:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:27.259 ************************************ 00:23:27.259 START TEST fio_dif_1_multi_subsystems 00:23:27.259 ************************************ 00:23:27.259 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:23:27.259 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:23:27.259 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:23:27.259 01:10:38 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:23:27.259 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:27.259 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:23:27.259 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:27.260 bdev_null0 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:27.260 [2024-05-15 01:10:38.104426] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:27.260 bdev_null1 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:27.260 { 00:23:27.260 "params": { 00:23:27.260 "name": "Nvme$subsystem", 00:23:27.260 "trtype": "$TEST_TRANSPORT", 00:23:27.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.260 "adrfam": "ipv4", 00:23:27.260 "trsvcid": "$NVMF_PORT", 00:23:27.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.260 "hdgst": 
${hdgst:-false}, 00:23:27.260 "ddgst": ${ddgst:-false} 00:23:27.260 }, 00:23:27.260 "method": "bdev_nvme_attach_controller" 00:23:27.260 } 00:23:27.260 EOF 00:23:27.260 )") 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:27.260 { 00:23:27.260 "params": { 00:23:27.260 "name": "Nvme$subsystem", 00:23:27.260 "trtype": "$TEST_TRANSPORT", 00:23:27.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.260 "adrfam": "ipv4", 00:23:27.260 "trsvcid": "$NVMF_PORT", 00:23:27.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.260 "hdgst": ${hdgst:-false}, 00:23:27.260 "ddgst": ${ddgst:-false} 00:23:27.260 }, 00:23:27.260 "method": "bdev_nvme_attach_controller" 00:23:27.260 } 00:23:27.260 EOF 00:23:27.260 )") 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
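The two config+= heredocs above build one bdev_nvme_attach_controller fragment per subsystem, and the jq/IFS=, steps join them into the JSON that fio receives on /dev/fd/62 (printed next). Written out as a file, the equivalent config looks roughly like the sketch below: the controller entries are copied from the printed output, while the surrounding "subsystems"/"bdev" wrapper is the standard SPDK JSON-config shape the spdk_bdev ioengine loads and is assumed here rather than visible in the trace:

# Sketch: persist the generated target config to a file instead of /dev/fd/62.
cat > /tmp/nvmf_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        },
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF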
00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:27.260 "params": { 00:23:27.260 "name": "Nvme0", 00:23:27.260 "trtype": "tcp", 00:23:27.260 "traddr": "10.0.0.2", 00:23:27.260 "adrfam": "ipv4", 00:23:27.260 "trsvcid": "4420", 00:23:27.260 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:27.260 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:27.260 "hdgst": false, 00:23:27.260 "ddgst": false 00:23:27.260 }, 00:23:27.260 "method": "bdev_nvme_attach_controller" 00:23:27.260 },{ 00:23:27.260 "params": { 00:23:27.260 "name": "Nvme1", 00:23:27.260 "trtype": "tcp", 00:23:27.260 "traddr": "10.0.0.2", 00:23:27.260 "adrfam": "ipv4", 00:23:27.260 "trsvcid": "4420", 00:23:27.260 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.260 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:27.260 "hdgst": false, 00:23:27.260 "ddgst": false 00:23:27.260 }, 00:23:27.260 "method": "bdev_nvme_attach_controller" 00:23:27.260 }' 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:23:27.260 01:10:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:27.260 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:27.260 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:27.260 fio-3.35 00:23:27.260 Starting 2 threads 00:23:27.260 EAL: No free 2048 kB hugepages reported on node 1 00:23:37.231 00:23:37.231 filename0: (groupid=0, jobs=1): err= 0: pid=1359907: Wed May 15 01:10:49 2024 00:23:37.231 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10042msec) 00:23:37.231 slat (nsec): min=7221, max=36368, avg=12799.26, stdev=4277.30 00:23:37.231 clat (usec): min=41053, max=46842, avg=41978.94, stdev=351.62 00:23:37.231 lat (usec): min=41061, max=46879, avg=41991.74, stdev=351.98 00:23:37.231 clat percentiles (usec): 00:23:37.231 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:23:37.231 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:23:37.231 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:23:37.231 | 99.00th=[42730], 99.50th=[42730], 99.90th=[46924], 99.95th=[46924], 00:23:37.231 | 99.99th=[46924] 
00:23:37.231 bw ( KiB/s): min= 352, max= 384, per=49.89%, avg=380.80, stdev= 9.85, samples=20 00:23:37.231 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:23:37.231 lat (msec) : 50=100.00% 00:23:37.231 cpu : usr=94.23%, sys=5.47%, ctx=33, majf=0, minf=130 00:23:37.231 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:37.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:37.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:37.231 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:37.231 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:37.231 filename1: (groupid=0, jobs=1): err= 0: pid=1359908: Wed May 15 01:10:49 2024 00:23:37.231 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10042msec) 00:23:37.231 slat (nsec): min=7125, max=51648, avg=12827.87, stdev=4479.60 00:23:37.231 clat (usec): min=41004, max=46834, avg=41979.33, stdev=342.64 00:23:37.231 lat (usec): min=41026, max=46870, avg=41992.16, stdev=342.90 00:23:37.231 clat percentiles (usec): 00:23:37.231 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:23:37.231 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:23:37.231 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:23:37.231 | 99.00th=[42730], 99.50th=[42730], 99.90th=[46924], 99.95th=[46924], 00:23:37.231 | 99.99th=[46924] 00:23:37.231 bw ( KiB/s): min= 352, max= 384, per=49.89%, avg=380.80, stdev= 9.85, samples=20 00:23:37.231 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:23:37.231 lat (msec) : 50=100.00% 00:23:37.231 cpu : usr=94.41%, sys=5.29%, ctx=16, majf=0, minf=167 00:23:37.231 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:37.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:37.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:37.231 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:37.231 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:37.231 00:23:37.231 Run status group 0 (all jobs): 00:23:37.231 READ: bw=762KiB/s (780kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=7648KiB (7832kB), run=10042-10042msec 00:23:37.490 01:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:23:37.490 01:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:23:37.490 01:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:37.490 01:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:37.490 01:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:23:37.490 01:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:37.490 01:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.490 01:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:37.490 01:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.490 01:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:37.490 01:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.490 01:10:49 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:37.490 01:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.490 01:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:37.490 01:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:37.491 01:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:23:37.491 01:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:37.491 01:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.491 01:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:37.491 01:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.491 01:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:37.491 01:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.491 01:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:37.491 01:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.491 00:23:37.491 real 0m11.676s 00:23:37.491 user 0m20.582s 00:23:37.491 sys 0m1.393s 00:23:37.491 01:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:37.491 01:10:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:37.491 ************************************ 00:23:37.491 END TEST fio_dif_1_multi_subsystems 00:23:37.491 ************************************ 00:23:37.491 01:10:49 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:23:37.491 01:10:49 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:37.491 01:10:49 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:37.491 01:10:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:37.491 ************************************ 00:23:37.491 START TEST fio_dif_rand_params 00:23:37.491 ************************************ 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:37.491 bdev_null0 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:37.491 [2024-05-15 01:10:49.839167] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:37.491 { 00:23:37.491 "params": { 00:23:37.491 "name": "Nvme$subsystem", 00:23:37.491 "trtype": "$TEST_TRANSPORT", 00:23:37.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.491 "adrfam": "ipv4", 00:23:37.491 "trsvcid": "$NVMF_PORT", 00:23:37.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.491 "hdgst": ${hdgst:-false}, 00:23:37.491 "ddgst": ${ddgst:-false} 00:23:37.491 }, 00:23:37.491 "method": "bdev_nvme_attach_controller" 00:23:37.491 } 00:23:37.491 EOF 00:23:37.491 )") 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
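At this point the harness has the JSON target config on /dev/fd/62 and the generated fio job on /dev/fd/61, and runs fio with the spdk_bdev external ioengine preloaded from the SPDK build tree. A hedged standalone equivalent for this fio_dif_rand_params step (NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5, per the variables set above): the job parameters mirror the trace, filename=Nvme0n1 assumes the default <controller-name>n<nsid> bdev created by bdev_nvme_attach_controller, and thread=1 is the usual requirement for the SPDK fio plugins.

# Sketch: run the same randread DIF workload by hand, outside the harness.
cat > /tmp/dif.fio <<'EOF'
[global]
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1

[dif_job]
filename=Nvme0n1
EOF

# Preload the spdk_bdev ioengine and point it at the target config from the previous step.
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvmf_bdev.json /tmp/dif.fio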
00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:37.491 "params": { 00:23:37.491 "name": "Nvme0", 00:23:37.491 "trtype": "tcp", 00:23:37.491 "traddr": "10.0.0.2", 00:23:37.491 "adrfam": "ipv4", 00:23:37.491 "trsvcid": "4420", 00:23:37.491 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:37.491 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:37.491 "hdgst": false, 00:23:37.491 "ddgst": false 00:23:37.491 }, 00:23:37.491 "method": "bdev_nvme_attach_controller" 00:23:37.491 }' 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:23:37.491 01:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:37.751 01:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:37.751 01:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:37.751 01:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:23:37.751 01:10:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:37.751 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:37.751 ... 
00:23:37.751 fio-3.35 00:23:37.751 Starting 3 threads 00:23:37.751 EAL: No free 2048 kB hugepages reported on node 1 00:23:44.347 00:23:44.348 filename0: (groupid=0, jobs=1): err= 0: pid=1361308: Wed May 15 01:10:55 2024 00:23:44.348 read: IOPS=256, BW=32.0MiB/s (33.6MB/s)(160MiB/5006msec) 00:23:44.348 slat (nsec): min=5243, max=29721, avg=13348.16, stdev=2115.36 00:23:44.348 clat (usec): min=5420, max=89744, avg=11694.40, stdev=10931.79 00:23:44.348 lat (usec): min=5433, max=89758, avg=11707.75, stdev=10931.79 00:23:44.348 clat percentiles (usec): 00:23:44.348 | 1.00th=[ 5866], 5.00th=[ 6194], 10.00th=[ 6521], 20.00th=[ 6980], 00:23:44.348 | 30.00th=[ 7570], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9372], 00:23:44.348 | 70.00th=[ 9765], 80.00th=[10945], 90.00th=[12780], 95.00th=[49546], 00:23:44.348 | 99.00th=[52167], 99.50th=[53216], 99.90th=[54789], 99.95th=[89654], 00:23:44.348 | 99.99th=[89654] 00:23:44.348 bw ( KiB/s): min=26112, max=38144, per=47.45%, avg=32742.40, stdev=4559.47, samples=10 00:23:44.348 iops : min= 204, max= 298, avg=255.80, stdev=35.62, samples=10 00:23:44.348 lat (msec) : 10=72.54%, 20=20.51%, 50=2.96%, 100=3.98% 00:23:44.348 cpu : usr=92.07%, sys=7.29%, ctx=31, majf=0, minf=94 00:23:44.348 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:44.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:44.348 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:44.348 issued rwts: total=1282,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:44.348 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:44.348 filename0: (groupid=0, jobs=1): err= 0: pid=1361309: Wed May 15 01:10:55 2024 00:23:44.348 read: IOPS=216, BW=27.0MiB/s (28.3MB/s)(135MiB/5007msec) 00:23:44.348 slat (nsec): min=5175, max=35448, avg=14013.52, stdev=2897.55 00:23:44.348 clat (usec): min=5690, max=91500, avg=13848.81, stdev=13521.38 00:23:44.348 lat (usec): min=5703, max=91513, avg=13862.83, stdev=13521.40 00:23:44.348 clat percentiles (usec): 00:23:44.348 | 1.00th=[ 5997], 5.00th=[ 6521], 10.00th=[ 6718], 20.00th=[ 7308], 00:23:44.348 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9896], 00:23:44.348 | 70.00th=[10683], 80.00th=[11731], 90.00th=[49021], 95.00th=[50594], 00:23:44.348 | 99.00th=[52691], 99.50th=[53216], 99.90th=[91751], 99.95th=[91751], 00:23:44.348 | 99.99th=[91751] 00:23:44.348 bw ( KiB/s): min=22016, max=34048, per=40.06%, avg=27648.00, stdev=3512.17, samples=10 00:23:44.348 iops : min= 172, max= 266, avg=216.00, stdev=27.44, samples=10 00:23:44.348 lat (msec) : 10=62.05%, 20=26.78%, 50=3.88%, 100=7.29% 00:23:44.348 cpu : usr=91.23%, sys=7.77%, ctx=7, majf=0, minf=93 00:23:44.348 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:44.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:44.348 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:44.348 issued rwts: total=1083,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:44.348 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:44.348 filename0: (groupid=0, jobs=1): err= 0: pid=1361310: Wed May 15 01:10:55 2024 00:23:44.348 read: IOPS=70, BW=9029KiB/s (9245kB/s)(44.5MiB/5047msec) 00:23:44.348 slat (nsec): min=5086, max=34770, avg=12801.55, stdev=2999.55 00:23:44.348 clat (msec): min=7, max=101, avg=42.38, stdev=21.96 00:23:44.348 lat (msec): min=7, max=101, avg=42.39, stdev=21.96 00:23:44.348 clat percentiles (msec): 00:23:44.348 | 1.00th=[ 
9], 5.00th=[ 10], 10.00th=[ 12], 20.00th=[ 15], 00:23:44.348 | 30.00th=[ 17], 40.00th=[ 52], 50.00th=[ 54], 60.00th=[ 55], 00:23:44.348 | 70.00th=[ 56], 80.00th=[ 57], 90.00th=[ 58], 95.00th=[ 60], 00:23:44.348 | 99.00th=[ 100], 99.50th=[ 101], 99.90th=[ 103], 99.95th=[ 103], 00:23:44.348 | 99.99th=[ 103] 00:23:44.348 bw ( KiB/s): min= 6912, max=11264, per=13.10%, avg=9038.30, stdev=1515.67, samples=10 00:23:44.348 iops : min= 54, max= 88, avg=70.60, stdev=11.85, samples=10 00:23:44.348 lat (msec) : 10=5.34%, 20=28.09%, 50=0.56%, 100=65.45%, 250=0.56% 00:23:44.348 cpu : usr=93.40%, sys=6.20%, ctx=8, majf=0, minf=88 00:23:44.348 IO depths : 1=4.8%, 2=95.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:44.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:44.348 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:44.348 issued rwts: total=356,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:44.348 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:44.348 00:23:44.348 Run status group 0 (all jobs): 00:23:44.348 READ: bw=67.4MiB/s (70.7MB/s), 9029KiB/s-32.0MiB/s (9245kB/s-33.6MB/s), io=340MiB (357MB), run=5006-5047msec 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:44.348 bdev_null0 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:44.348 [2024-05-15 01:10:56.068053] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:44.348 bdev_null1 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:44.348 01:10:56 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:44.348 bdev_null2 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:44.348 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:44.349 { 00:23:44.349 
"params": { 00:23:44.349 "name": "Nvme$subsystem", 00:23:44.349 "trtype": "$TEST_TRANSPORT", 00:23:44.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.349 "adrfam": "ipv4", 00:23:44.349 "trsvcid": "$NVMF_PORT", 00:23:44.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.349 "hdgst": ${hdgst:-false}, 00:23:44.349 "ddgst": ${ddgst:-false} 00:23:44.349 }, 00:23:44.349 "method": "bdev_nvme_attach_controller" 00:23:44.349 } 00:23:44.349 EOF 00:23:44.349 )") 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:44.349 { 00:23:44.349 "params": { 00:23:44.349 "name": "Nvme$subsystem", 00:23:44.349 "trtype": "$TEST_TRANSPORT", 00:23:44.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.349 "adrfam": "ipv4", 00:23:44.349 "trsvcid": "$NVMF_PORT", 00:23:44.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.349 "hdgst": ${hdgst:-false}, 00:23:44.349 "ddgst": ${ddgst:-false} 00:23:44.349 }, 00:23:44.349 "method": "bdev_nvme_attach_controller" 00:23:44.349 } 00:23:44.349 EOF 00:23:44.349 )") 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:44.349 01:10:56 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:44.349 { 00:23:44.349 "params": { 00:23:44.349 "name": "Nvme$subsystem", 00:23:44.349 "trtype": "$TEST_TRANSPORT", 00:23:44.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.349 "adrfam": "ipv4", 00:23:44.349 "trsvcid": "$NVMF_PORT", 00:23:44.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.349 "hdgst": ${hdgst:-false}, 00:23:44.349 "ddgst": ${ddgst:-false} 00:23:44.349 }, 00:23:44.349 "method": "bdev_nvme_attach_controller" 00:23:44.349 } 00:23:44.349 EOF 00:23:44.349 )") 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:44.349 "params": { 00:23:44.349 "name": "Nvme0", 00:23:44.349 "trtype": "tcp", 00:23:44.349 "traddr": "10.0.0.2", 00:23:44.349 "adrfam": "ipv4", 00:23:44.349 "trsvcid": "4420", 00:23:44.349 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:44.349 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:44.349 "hdgst": false, 00:23:44.349 "ddgst": false 00:23:44.349 }, 00:23:44.349 "method": "bdev_nvme_attach_controller" 00:23:44.349 },{ 00:23:44.349 "params": { 00:23:44.349 "name": "Nvme1", 00:23:44.349 "trtype": "tcp", 00:23:44.349 "traddr": "10.0.0.2", 00:23:44.349 "adrfam": "ipv4", 00:23:44.349 "trsvcid": "4420", 00:23:44.349 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:44.349 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:44.349 "hdgst": false, 00:23:44.349 "ddgst": false 00:23:44.349 }, 00:23:44.349 "method": "bdev_nvme_attach_controller" 00:23:44.349 },{ 00:23:44.349 "params": { 00:23:44.349 "name": "Nvme2", 00:23:44.349 "trtype": "tcp", 00:23:44.349 "traddr": "10.0.0.2", 00:23:44.349 "adrfam": "ipv4", 00:23:44.349 "trsvcid": "4420", 00:23:44.349 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:44.349 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:44.349 "hdgst": false, 00:23:44.349 "ddgst": false 00:23:44.349 }, 00:23:44.349 "method": "bdev_nvme_attach_controller" 00:23:44.349 }' 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 
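Each test ends the way it began: destroy_subsystems walks the subsystems it created, deleting the NVMe-oF subsystem and then its backing null bdev, as the nvmf_delete_subsystem/bdev_null_delete traces earlier in the log show. A standalone sketch for the three-subsystem case configured here, assuming the same scripts/rpc.py client as before:

# Sketch: tear down the three DIF subsystems and their null bdevs.
RPC=./scripts/rpc.py          # assumed path to the SPDK RPC client
for sub in 0 1 2; do
    $RPC nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$sub"
    $RPC bdev_null_delete "bdev_null$sub"
done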
00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:23:44.349 01:10:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:44.349 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:44.349 ... 00:23:44.349 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:44.349 ... 00:23:44.349 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:44.349 ... 00:23:44.349 fio-3.35 00:23:44.349 Starting 24 threads 00:23:44.349 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.553 00:23:56.553 filename0: (groupid=0, jobs=1): err= 0: pid=1362180: Wed May 15 01:11:07 2024 00:23:56.553 read: IOPS=465, BW=1860KiB/s (1905kB/s)(18.2MiB/10003msec) 00:23:56.553 slat (usec): min=3, max=225, avg=24.04, stdev=16.76 00:23:56.553 clat (usec): min=8344, max=62462, avg=34206.13, stdev=5136.98 00:23:56.553 lat (usec): min=8367, max=62477, avg=34230.17, stdev=5137.28 00:23:56.553 clat percentiles (usec): 00:23:56.553 | 1.00th=[10028], 5.00th=[28181], 10.00th=[32375], 20.00th=[33817], 00:23:56.554 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34341], 60.00th=[34866], 00:23:56.554 | 70.00th=[34866], 80.00th=[35390], 90.00th=[35914], 95.00th=[38011], 00:23:56.554 | 99.00th=[58459], 99.50th=[58983], 99.90th=[62653], 99.95th=[62653], 00:23:56.554 | 99.99th=[62653] 00:23:56.554 bw ( KiB/s): min= 1712, max= 2176, per=4.22%, avg=1857.68, stdev=111.32, samples=19 00:23:56.554 iops : min= 428, max= 544, avg=464.42, stdev=27.83, samples=19 00:23:56.554 lat (msec) : 10=0.99%, 20=1.63%, 50=95.79%, 100=1.59% 00:23:56.554 cpu : usr=97.48%, sys=1.73%, ctx=98, majf=0, minf=56 00:23:56.554 IO depths : 1=3.6%, 2=8.9%, 4=22.0%, 8=56.2%, 16=9.4%, 32=0.0%, >=64=0.0% 00:23:56.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.554 complete : 0=0.0%, 4=93.7%, 8=0.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.554 issued rwts: total=4652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.554 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.554 filename0: (groupid=0, jobs=1): err= 0: pid=1362181: Wed May 15 01:11:07 2024 00:23:56.554 read: IOPS=460, BW=1842KiB/s (1886kB/s)(18.0MiB/10006msec) 00:23:56.554 slat (usec): min=8, max=161, avg=35.94, stdev=25.16 00:23:56.554 clat (usec): min=9269, max=65113, avg=34468.21, stdev=2953.78 00:23:56.554 lat (usec): min=9353, max=65240, avg=34504.15, stdev=2954.10 00:23:56.554 clat percentiles (usec): 00:23:56.554 | 1.00th=[24249], 5.00th=[32900], 10.00th=[33424], 20.00th=[33817], 00:23:56.554 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34341], 60.00th=[34341], 00:23:56.554 | 70.00th=[34866], 80.00th=[34866], 90.00th=[35390], 95.00th=[36439], 00:23:56.554 | 99.00th=[43254], 99.50th=[49546], 99.90th=[61080], 99.95th=[61080], 00:23:56.554 | 99.99th=[65274] 00:23:56.554 bw ( KiB/s): min= 1776, max= 1920, per=4.17%, avg=1838.95, stdev=60.63, samples=19 00:23:56.554 iops : min= 444, max= 480, avg=459.74, stdev=15.16, samples=19 00:23:56.554 lat (msec) : 10=0.20%, 20=0.37%, 50=98.94%, 100=0.50% 00:23:56.554 cpu : usr=98.12%, sys=1.46%, 
ctx=15, majf=0, minf=37 00:23:56.554 IO depths : 1=2.3%, 2=8.2%, 4=24.2%, 8=55.2%, 16=10.2%, 32=0.0%, >=64=0.0% 00:23:56.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.554 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.554 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.554 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.554 filename0: (groupid=0, jobs=1): err= 0: pid=1362182: Wed May 15 01:11:07 2024 00:23:56.554 read: IOPS=460, BW=1842KiB/s (1886kB/s)(18.0MiB/10008msec) 00:23:56.554 slat (usec): min=8, max=111, avg=44.89, stdev=18.34 00:23:56.554 clat (usec): min=24043, max=46481, avg=34372.25, stdev=1453.20 00:23:56.554 lat (usec): min=24060, max=46510, avg=34417.13, stdev=1453.36 00:23:56.554 clat percentiles (usec): 00:23:56.554 | 1.00th=[28705], 5.00th=[32900], 10.00th=[33424], 20.00th=[33817], 00:23:56.554 | 30.00th=[33817], 40.00th=[34341], 50.00th=[34341], 60.00th=[34341], 00:23:56.554 | 70.00th=[34866], 80.00th=[34866], 90.00th=[35390], 95.00th=[35914], 00:23:56.554 | 99.00th=[40109], 99.50th=[41681], 99.90th=[44303], 99.95th=[44827], 00:23:56.554 | 99.99th=[46400] 00:23:56.554 bw ( KiB/s): min= 1776, max= 1920, per=4.17%, avg=1838.95, stdev=60.16, samples=19 00:23:56.554 iops : min= 444, max= 480, avg=459.74, stdev=15.04, samples=19 00:23:56.554 lat (msec) : 50=100.00% 00:23:56.554 cpu : usr=97.28%, sys=1.81%, ctx=166, majf=0, minf=40 00:23:56.554 IO depths : 1=4.3%, 2=10.4%, 4=24.8%, 8=52.3%, 16=8.2%, 32=0.0%, >=64=0.0% 00:23:56.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.554 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.554 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.554 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.554 filename0: (groupid=0, jobs=1): err= 0: pid=1362183: Wed May 15 01:11:07 2024 00:23:56.554 read: IOPS=463, BW=1855KiB/s (1899kB/s)(18.2MiB/10034msec) 00:23:56.554 slat (usec): min=5, max=146, avg=44.34, stdev=31.75 00:23:56.554 clat (usec): min=3930, max=59020, avg=34073.30, stdev=4886.18 00:23:56.554 lat (usec): min=3940, max=59062, avg=34117.64, stdev=4886.16 00:23:56.554 clat percentiles (usec): 00:23:56.554 | 1.00th=[14091], 5.00th=[31851], 10.00th=[33162], 20.00th=[33817], 00:23:56.554 | 30.00th=[33817], 40.00th=[34341], 50.00th=[34341], 60.00th=[34341], 00:23:56.554 | 70.00th=[34866], 80.00th=[34866], 90.00th=[35390], 95.00th=[36963], 00:23:56.554 | 99.00th=[53740], 99.50th=[54789], 99.90th=[58983], 99.95th=[58983], 00:23:56.554 | 99.99th=[58983] 00:23:56.554 bw ( KiB/s): min= 1792, max= 2048, per=4.22%, avg=1859.25, stdev=70.35, samples=20 00:23:56.554 iops : min= 448, max= 512, avg=464.80, stdev=17.58, samples=20 00:23:56.554 lat (msec) : 4=0.04%, 10=0.64%, 20=2.39%, 50=94.86%, 100=2.06% 00:23:56.554 cpu : usr=98.17%, sys=1.39%, ctx=23, majf=0, minf=37 00:23:56.554 IO depths : 1=3.6%, 2=9.4%, 4=23.6%, 8=54.4%, 16=9.0%, 32=0.0%, >=64=0.0% 00:23:56.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.554 complete : 0=0.0%, 4=94.0%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.554 issued rwts: total=4653,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.554 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.554 filename0: (groupid=0, jobs=1): err= 0: pid=1362184: Wed May 15 01:11:07 2024 00:23:56.554 read: IOPS=462, BW=1849KiB/s (1893kB/s)(18.1MiB/10016msec) 00:23:56.554 
slat (usec): min=8, max=127, avg=32.36, stdev=19.49 00:23:56.554 clat (usec): min=12600, max=67558, avg=34435.40, stdev=4459.57 00:23:56.554 lat (usec): min=12611, max=67594, avg=34467.76, stdev=4460.99 00:23:56.554 clat percentiles (usec): 00:23:56.554 | 1.00th=[17957], 5.00th=[27657], 10.00th=[33162], 20.00th=[33817], 00:23:56.554 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34341], 60.00th=[34866], 00:23:56.554 | 70.00th=[34866], 80.00th=[35390], 90.00th=[35914], 95.00th=[39060], 00:23:56.554 | 99.00th=[54264], 99.50th=[57934], 99.90th=[67634], 99.95th=[67634], 00:23:56.554 | 99.99th=[67634] 00:23:56.554 bw ( KiB/s): min= 1664, max= 2064, per=4.18%, avg=1841.47, stdev=100.83, samples=19 00:23:56.554 iops : min= 416, max= 516, avg=460.37, stdev=25.21, samples=19 00:23:56.554 lat (msec) : 20=1.53%, 50=97.06%, 100=1.40% 00:23:56.554 cpu : usr=91.31%, sys=4.06%, ctx=119, majf=0, minf=63 00:23:56.554 IO depths : 1=0.6%, 2=1.1%, 4=7.7%, 8=78.0%, 16=12.6%, 32=0.0%, >=64=0.0% 00:23:56.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.554 complete : 0=0.0%, 4=89.5%, 8=5.6%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.554 issued rwts: total=4630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.554 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.554 filename0: (groupid=0, jobs=1): err= 0: pid=1362185: Wed May 15 01:11:07 2024 00:23:56.554 read: IOPS=456, BW=1825KiB/s (1869kB/s)(17.8MiB/10005msec) 00:23:56.554 slat (usec): min=8, max=165, avg=34.70, stdev=25.65 00:23:56.554 clat (usec): min=4959, max=64583, avg=34844.37, stdev=5455.08 00:23:56.554 lat (usec): min=4968, max=64635, avg=34879.07, stdev=5454.11 00:23:56.554 clat percentiles (usec): 00:23:56.554 | 1.00th=[15270], 5.00th=[30540], 10.00th=[33162], 20.00th=[33817], 00:23:56.554 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34341], 60.00th=[34866], 00:23:56.554 | 70.00th=[34866], 80.00th=[35390], 90.00th=[35914], 95.00th=[40633], 00:23:56.554 | 99.00th=[57934], 99.50th=[59507], 99.90th=[60556], 99.95th=[60556], 00:23:56.554 | 99.99th=[64750] 00:23:56.554 bw ( KiB/s): min= 1664, max= 1904, per=4.12%, avg=1815.58, stdev=63.41, samples=19 00:23:56.554 iops : min= 416, max= 476, avg=453.89, stdev=15.85, samples=19 00:23:56.554 lat (msec) : 10=0.46%, 20=1.36%, 50=94.81%, 100=3.37% 00:23:56.554 cpu : usr=98.13%, sys=1.37%, ctx=34, majf=0, minf=62 00:23:56.554 IO depths : 1=0.8%, 2=4.5%, 4=16.1%, 8=65.1%, 16=13.5%, 32=0.0%, >=64=0.0% 00:23:56.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.554 complete : 0=0.0%, 4=92.5%, 8=3.5%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.554 issued rwts: total=4566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.554 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.554 filename0: (groupid=0, jobs=1): err= 0: pid=1362186: Wed May 15 01:11:07 2024 00:23:56.554 read: IOPS=471, BW=1884KiB/s (1930kB/s)(18.4MiB/10023msec) 00:23:56.554 slat (usec): min=5, max=139, avg=31.92, stdev=28.61 00:23:56.554 clat (usec): min=6538, max=43690, avg=33683.58, stdev=3801.30 00:23:56.554 lat (usec): min=6547, max=43721, avg=33715.50, stdev=3802.86 00:23:56.554 clat percentiles (usec): 00:23:56.554 | 1.00th=[ 8586], 5.00th=[30016], 10.00th=[32637], 20.00th=[33424], 00:23:56.554 | 30.00th=[33817], 40.00th=[34341], 50.00th=[34341], 60.00th=[34866], 00:23:56.554 | 70.00th=[34866], 80.00th=[34866], 90.00th=[35390], 95.00th=[35914], 00:23:56.554 | 99.00th=[38536], 99.50th=[39584], 99.90th=[43779], 99.95th=[43779], 00:23:56.554 | 
99.99th=[43779] 00:23:56.554 bw ( KiB/s): min= 1792, max= 2208, per=4.27%, avg=1882.20, stdev=125.10, samples=20 00:23:56.554 iops : min= 448, max= 552, avg=470.55, stdev=31.27, samples=20 00:23:56.554 lat (msec) : 10=1.04%, 20=1.08%, 50=97.88% 00:23:56.554 cpu : usr=96.95%, sys=1.80%, ctx=45, majf=0, minf=49 00:23:56.554 IO depths : 1=5.2%, 2=11.0%, 4=23.5%, 8=52.8%, 16=7.4%, 32=0.0%, >=64=0.0% 00:23:56.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.554 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.554 issued rwts: total=4722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.554 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.554 filename0: (groupid=0, jobs=1): err= 0: pid=1362187: Wed May 15 01:11:07 2024 00:23:56.554 read: IOPS=455, BW=1823KiB/s (1867kB/s)(17.8MiB/10008msec) 00:23:56.554 slat (usec): min=8, max=463, avg=46.47, stdev=29.34 00:23:56.554 clat (usec): min=12313, max=61814, avg=34870.27, stdev=4343.71 00:23:56.554 lat (usec): min=12325, max=61855, avg=34916.74, stdev=4345.31 00:23:56.554 clat percentiles (usec): 00:23:56.554 | 1.00th=[17957], 5.00th=[32375], 10.00th=[33424], 20.00th=[33817], 00:23:56.554 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34341], 60.00th=[34866], 00:23:56.554 | 70.00th=[34866], 80.00th=[35390], 90.00th=[36439], 95.00th=[40109], 00:23:56.554 | 99.00th=[52691], 99.50th=[57934], 99.90th=[61604], 99.95th=[61604], 00:23:56.554 | 99.99th=[61604] 00:23:56.554 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1828.63, stdev=59.71, samples=19 00:23:56.554 iops : min= 416, max= 480, avg=457.16, stdev=14.93, samples=19 00:23:56.554 lat (msec) : 20=1.49%, 50=96.21%, 100=2.30% 00:23:56.554 cpu : usr=91.67%, sys=3.95%, ctx=105, majf=0, minf=63 00:23:56.554 IO depths : 1=0.2%, 2=0.4%, 4=6.4%, 8=80.2%, 16=12.9%, 32=0.0%, >=64=0.0% 00:23:56.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.554 complete : 0=0.0%, 4=89.1%, 8=5.8%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.554 issued rwts: total=4561,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.554 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.554 filename1: (groupid=0, jobs=1): err= 0: pid=1362188: Wed May 15 01:11:07 2024 00:23:56.554 read: IOPS=450, BW=1802KiB/s (1845kB/s)(17.6MiB/10005msec) 00:23:56.554 slat (usec): min=8, max=404, avg=38.71, stdev=22.99 00:23:56.554 clat (usec): min=11019, max=65401, avg=35205.18, stdev=6271.66 00:23:56.554 lat (usec): min=11038, max=65412, avg=35243.89, stdev=6269.92 00:23:56.554 clat percentiles (usec): 00:23:56.554 | 1.00th=[15139], 5.00th=[27395], 10.00th=[33162], 20.00th=[33817], 00:23:56.554 | 30.00th=[33817], 40.00th=[34341], 50.00th=[34341], 60.00th=[34866], 00:23:56.554 | 70.00th=[34866], 80.00th=[35390], 90.00th=[39584], 95.00th=[51119], 00:23:56.554 | 99.00th=[59507], 99.50th=[61080], 99.90th=[62653], 99.95th=[65274], 00:23:56.554 | 99.99th=[65274] 00:23:56.554 bw ( KiB/s): min= 1664, max= 1920, per=4.08%, avg=1796.42, stdev=81.49, samples=19 00:23:56.554 iops : min= 416, max= 480, avg=449.11, stdev=20.37, samples=19 00:23:56.554 lat (msec) : 20=2.80%, 50=91.92%, 100=5.28% 00:23:56.555 cpu : usr=94.08%, sys=3.12%, ctx=180, majf=0, minf=48 00:23:56.555 IO depths : 1=3.2%, 2=7.8%, 4=20.7%, 8=58.6%, 16=9.6%, 32=0.0%, >=64=0.0% 00:23:56.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.555 complete : 0=0.0%, 4=93.2%, 8=1.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.555 issued rwts: 
total=4507,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.555 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.555 filename1: (groupid=0, jobs=1): err= 0: pid=1362189: Wed May 15 01:11:07 2024 00:23:56.555 read: IOPS=460, BW=1843KiB/s (1887kB/s)(18.0MiB/10002msec) 00:23:56.555 slat (usec): min=8, max=189, avg=28.55, stdev=17.97 00:23:56.555 clat (usec): min=26844, max=43181, avg=34471.69, stdev=1023.14 00:23:56.555 lat (usec): min=26860, max=43192, avg=34500.23, stdev=1021.28 00:23:56.555 clat percentiles (usec): 00:23:56.555 | 1.00th=[31851], 5.00th=[32900], 10.00th=[33424], 20.00th=[33817], 00:23:56.555 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34341], 60.00th=[34341], 00:23:56.555 | 70.00th=[34866], 80.00th=[34866], 90.00th=[35390], 95.00th=[35914], 00:23:56.555 | 99.00th=[38011], 99.50th=[39060], 99.90th=[42206], 99.95th=[42206], 00:23:56.555 | 99.99th=[43254] 00:23:56.555 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1838.95, stdev=63.16, samples=19 00:23:56.555 iops : min= 448, max= 480, avg=459.74, stdev=15.79, samples=19 00:23:56.555 lat (msec) : 50=100.00% 00:23:56.555 cpu : usr=97.19%, sys=1.83%, ctx=118, majf=0, minf=46 00:23:56.555 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:23:56.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.555 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.555 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.555 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.555 filename1: (groupid=0, jobs=1): err= 0: pid=1362190: Wed May 15 01:11:07 2024 00:23:56.555 read: IOPS=462, BW=1849KiB/s (1893kB/s)(18.1MiB/10008msec) 00:23:56.555 slat (usec): min=6, max=143, avg=33.68, stdev=25.27 00:23:56.555 clat (usec): min=6745, max=62453, avg=34360.88, stdev=5697.59 00:23:56.555 lat (usec): min=6754, max=62476, avg=34394.56, stdev=5700.04 00:23:56.555 clat percentiles (usec): 00:23:56.555 | 1.00th=[ 9241], 5.00th=[26608], 10.00th=[31589], 20.00th=[33424], 00:23:56.555 | 30.00th=[33817], 40.00th=[34341], 50.00th=[34341], 60.00th=[34341], 00:23:56.555 | 70.00th=[34866], 80.00th=[35390], 90.00th=[36963], 95.00th=[41681], 00:23:56.555 | 99.00th=[56886], 99.50th=[61604], 99.90th=[62653], 99.95th=[62653], 00:23:56.555 | 99.99th=[62653] 00:23:56.555 bw ( KiB/s): min= 1680, max= 2048, per=4.19%, avg=1846.32, stdev=94.98, samples=19 00:23:56.555 iops : min= 420, max= 512, avg=461.58, stdev=23.74, samples=19 00:23:56.555 lat (msec) : 10=1.02%, 20=1.58%, 50=95.46%, 100=1.95% 00:23:56.555 cpu : usr=98.08%, sys=1.48%, ctx=20, majf=0, minf=37 00:23:56.555 IO depths : 1=3.9%, 2=8.8%, 4=20.5%, 8=58.0%, 16=8.9%, 32=0.0%, >=64=0.0% 00:23:56.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.555 complete : 0=0.0%, 4=93.2%, 8=1.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.555 issued rwts: total=4625,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.555 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.555 filename1: (groupid=0, jobs=1): err= 0: pid=1362191: Wed May 15 01:11:07 2024 00:23:56.555 read: IOPS=468, BW=1876KiB/s (1921kB/s)(18.4MiB/10023msec) 00:23:56.555 slat (usec): min=8, max=156, avg=36.10, stdev=23.74 00:23:56.555 clat (usec): min=8107, max=64626, avg=33828.26, stdev=4667.66 00:23:56.555 lat (usec): min=8190, max=64660, avg=33864.36, stdev=4665.69 00:23:56.555 clat percentiles (usec): 00:23:56.555 | 1.00th=[16450], 5.00th=[26608], 10.00th=[31327], 
20.00th=[33817], 00:23:56.555 | 30.00th=[33817], 40.00th=[34341], 50.00th=[34341], 60.00th=[34341], 00:23:56.555 | 70.00th=[34866], 80.00th=[34866], 90.00th=[35914], 95.00th=[38011], 00:23:56.555 | 99.00th=[45351], 99.50th=[59507], 99.90th=[64750], 99.95th=[64750], 00:23:56.555 | 99.99th=[64750] 00:23:56.555 bw ( KiB/s): min= 1792, max= 2048, per=4.25%, avg=1873.40, stdev=92.45, samples=20 00:23:56.555 iops : min= 448, max= 512, avg=468.35, stdev=23.11, samples=20 00:23:56.555 lat (msec) : 10=0.53%, 20=2.17%, 50=96.43%, 100=0.87% 00:23:56.555 cpu : usr=97.82%, sys=1.70%, ctx=17, majf=0, minf=36 00:23:56.555 IO depths : 1=4.8%, 2=10.1%, 4=21.6%, 8=55.7%, 16=7.8%, 32=0.0%, >=64=0.0% 00:23:56.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.555 complete : 0=0.0%, 4=93.3%, 8=1.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.555 issued rwts: total=4700,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.555 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.555 filename1: (groupid=0, jobs=1): err= 0: pid=1362192: Wed May 15 01:11:07 2024 00:23:56.555 read: IOPS=456, BW=1827KiB/s (1871kB/s)(17.9MiB/10016msec) 00:23:56.555 slat (usec): min=7, max=1320, avg=45.53, stdev=28.53 00:23:56.555 clat (usec): min=16962, max=82103, avg=34664.94, stdev=3620.93 00:23:56.555 lat (usec): min=16971, max=82124, avg=34710.48, stdev=3618.33 00:23:56.555 clat percentiles (usec): 00:23:56.555 | 1.00th=[27395], 5.00th=[32900], 10.00th=[33424], 20.00th=[33817], 00:23:56.555 | 30.00th=[33817], 40.00th=[34341], 50.00th=[34341], 60.00th=[34341], 00:23:56.555 | 70.00th=[34866], 80.00th=[34866], 90.00th=[35390], 95.00th=[37487], 00:23:56.555 | 99.00th=[53216], 99.50th=[57934], 99.90th=[65274], 99.95th=[82314], 00:23:56.555 | 99.99th=[82314] 00:23:56.555 bw ( KiB/s): min= 1648, max= 1920, per=4.13%, avg=1818.95, stdev=83.66, samples=19 00:23:56.555 iops : min= 412, max= 480, avg=454.74, stdev=20.91, samples=19 00:23:56.555 lat (msec) : 20=0.39%, 50=98.34%, 100=1.27% 00:23:56.555 cpu : usr=95.23%, sys=2.55%, ctx=114, majf=0, minf=48 00:23:56.555 IO depths : 1=1.8%, 2=7.1%, 4=22.2%, 8=57.9%, 16=10.9%, 32=0.0%, >=64=0.0% 00:23:56.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.555 complete : 0=0.0%, 4=93.7%, 8=0.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.555 issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.555 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.555 filename1: (groupid=0, jobs=1): err= 0: pid=1362193: Wed May 15 01:11:07 2024 00:23:56.555 read: IOPS=460, BW=1842KiB/s (1886kB/s)(18.0MiB/10005msec) 00:23:56.555 slat (usec): min=8, max=154, avg=45.42, stdev=19.88 00:23:56.555 clat (usec): min=9510, max=53147, avg=34335.90, stdev=2128.88 00:23:56.555 lat (usec): min=9519, max=53164, avg=34381.31, stdev=2129.57 00:23:56.555 clat percentiles (usec): 00:23:56.555 | 1.00th=[29230], 5.00th=[32900], 10.00th=[33424], 20.00th=[33817], 00:23:56.555 | 30.00th=[33817], 40.00th=[34341], 50.00th=[34341], 60.00th=[34341], 00:23:56.555 | 70.00th=[34866], 80.00th=[34866], 90.00th=[35390], 95.00th=[35914], 00:23:56.555 | 99.00th=[39584], 99.50th=[41681], 99.90th=[52167], 99.95th=[52691], 00:23:56.555 | 99.99th=[53216] 00:23:56.555 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1832.42, stdev=69.00, samples=19 00:23:56.555 iops : min= 416, max= 480, avg=458.11, stdev=17.25, samples=19 00:23:56.555 lat (msec) : 10=0.04%, 20=0.35%, 50=99.13%, 100=0.48% 00:23:56.555 cpu : usr=98.31%, sys=1.27%, ctx=15, 
majf=0, minf=40 00:23:56.555 IO depths : 1=2.0%, 2=8.2%, 4=24.9%, 8=54.3%, 16=10.5%, 32=0.0%, >=64=0.0% 00:23:56.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.555 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.555 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.555 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.555 filename1: (groupid=0, jobs=1): err= 0: pid=1362194: Wed May 15 01:11:07 2024 00:23:56.555 read: IOPS=454, BW=1819KiB/s (1863kB/s)(17.8MiB/10005msec) 00:23:56.555 slat (usec): min=8, max=151, avg=47.01, stdev=29.96 00:23:56.555 clat (usec): min=4648, max=61003, avg=34882.78, stdev=6745.41 00:23:56.555 lat (usec): min=4663, max=61113, avg=34929.79, stdev=6744.70 00:23:56.555 clat percentiles (usec): 00:23:56.555 | 1.00th=[14091], 5.00th=[21890], 10.00th=[31851], 20.00th=[33817], 00:23:56.555 | 30.00th=[33817], 40.00th=[34341], 50.00th=[34341], 60.00th=[34341], 00:23:56.555 | 70.00th=[34866], 80.00th=[35390], 90.00th=[39584], 95.00th=[50070], 00:23:56.555 | 99.00th=[56886], 99.50th=[58983], 99.90th=[61080], 99.95th=[61080], 00:23:56.555 | 99.99th=[61080] 00:23:56.555 bw ( KiB/s): min= 1664, max= 1896, per=4.10%, avg=1807.58, stdev=58.51, samples=19 00:23:56.555 iops : min= 416, max= 474, avg=451.89, stdev=14.63, samples=19 00:23:56.555 lat (msec) : 10=0.44%, 20=3.16%, 50=91.21%, 100=5.19% 00:23:56.555 cpu : usr=97.94%, sys=1.60%, ctx=13, majf=0, minf=51 00:23:56.555 IO depths : 1=1.1%, 2=4.7%, 4=16.3%, 8=65.4%, 16=12.5%, 32=0.0%, >=64=0.0% 00:23:56.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.555 complete : 0=0.0%, 4=92.0%, 8=3.4%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.555 issued rwts: total=4551,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.555 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.555 filename1: (groupid=0, jobs=1): err= 0: pid=1362195: Wed May 15 01:11:07 2024 00:23:56.555 read: IOPS=460, BW=1842KiB/s (1886kB/s)(18.0MiB/10006msec) 00:23:56.555 slat (usec): min=8, max=130, avg=45.98, stdev=17.66 00:23:56.555 clat (usec): min=15789, max=53253, avg=34354.69, stdev=1541.67 00:23:56.555 lat (usec): min=15799, max=53272, avg=34400.67, stdev=1542.26 00:23:56.555 clat percentiles (usec): 00:23:56.555 | 1.00th=[28967], 5.00th=[32900], 10.00th=[33424], 20.00th=[33817], 00:23:56.555 | 30.00th=[33817], 40.00th=[34341], 50.00th=[34341], 60.00th=[34341], 00:23:56.555 | 70.00th=[34866], 80.00th=[34866], 90.00th=[35390], 95.00th=[35914], 00:23:56.555 | 99.00th=[39584], 99.50th=[41157], 99.90th=[47449], 99.95th=[50070], 00:23:56.555 | 99.99th=[53216] 00:23:56.555 bw ( KiB/s): min= 1776, max= 1920, per=4.17%, avg=1838.11, stdev=62.34, samples=19 00:23:56.555 iops : min= 444, max= 480, avg=459.53, stdev=15.59, samples=19 00:23:56.555 lat (msec) : 20=0.09%, 50=99.87%, 100=0.04% 00:23:56.555 cpu : usr=97.93%, sys=1.65%, ctx=16, majf=0, minf=35 00:23:56.555 IO depths : 1=4.9%, 2=11.0%, 4=24.7%, 8=51.8%, 16=7.6%, 32=0.0%, >=64=0.0% 00:23:56.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.555 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.555 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.555 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.555 filename2: (groupid=0, jobs=1): err= 0: pid=1362196: Wed May 15 01:11:07 2024 00:23:56.555 read: IOPS=460, BW=1842KiB/s (1886kB/s)(18.0MiB/10006msec) 
00:23:56.555 slat (usec): min=7, max=146, avg=48.16, stdev=21.92 00:23:56.555 clat (usec): min=21736, max=54100, avg=34309.93, stdev=1601.15 00:23:56.555 lat (usec): min=21747, max=54119, avg=34358.09, stdev=1599.91 00:23:56.555 clat percentiles (usec): 00:23:56.555 | 1.00th=[28443], 5.00th=[32900], 10.00th=[33424], 20.00th=[33817], 00:23:56.555 | 30.00th=[33817], 40.00th=[34341], 50.00th=[34341], 60.00th=[34341], 00:23:56.555 | 70.00th=[34866], 80.00th=[34866], 90.00th=[35390], 95.00th=[35914], 00:23:56.555 | 99.00th=[40109], 99.50th=[42730], 99.90th=[44827], 99.95th=[52691], 00:23:56.555 | 99.99th=[54264] 00:23:56.555 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1838.95, stdev=63.16, samples=19 00:23:56.555 iops : min= 448, max= 480, avg=459.74, stdev=15.79, samples=19 00:23:56.555 lat (msec) : 50=99.91%, 100=0.09% 00:23:56.555 cpu : usr=97.69%, sys=1.79%, ctx=57, majf=0, minf=37 00:23:56.555 IO depths : 1=4.9%, 2=11.1%, 4=24.8%, 8=51.6%, 16=7.6%, 32=0.0%, >=64=0.0% 00:23:56.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.555 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.555 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.555 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.555 filename2: (groupid=0, jobs=1): err= 0: pid=1362197: Wed May 15 01:11:07 2024 00:23:56.555 read: IOPS=467, BW=1869KiB/s (1914kB/s)(18.3MiB/10025msec) 00:23:56.556 slat (usec): min=4, max=179, avg=32.14, stdev=26.68 00:23:56.556 clat (usec): min=8591, max=62012, avg=33954.15, stdev=4498.10 00:23:56.556 lat (usec): min=8628, max=62120, avg=33986.29, stdev=4500.30 00:23:56.556 clat percentiles (usec): 00:23:56.556 | 1.00th=[13566], 5.00th=[28181], 10.00th=[32900], 20.00th=[33817], 00:23:56.556 | 30.00th=[33817], 40.00th=[34341], 50.00th=[34341], 60.00th=[34341], 00:23:56.556 | 70.00th=[34866], 80.00th=[34866], 90.00th=[35390], 95.00th=[36963], 00:23:56.556 | 99.00th=[52691], 99.50th=[52691], 99.90th=[53740], 99.95th=[53740], 00:23:56.556 | 99.99th=[62129] 00:23:56.556 bw ( KiB/s): min= 1792, max= 2096, per=4.24%, avg=1870.00, stdev=80.87, samples=20 00:23:56.556 iops : min= 448, max= 524, avg=467.50, stdev=20.22, samples=20 00:23:56.556 lat (msec) : 10=0.34%, 20=2.48%, 50=95.71%, 100=1.47% 00:23:56.556 cpu : usr=97.76%, sys=1.74%, ctx=21, majf=0, minf=44 00:23:56.556 IO depths : 1=4.9%, 2=10.6%, 4=23.2%, 8=53.6%, 16=7.7%, 32=0.0%, >=64=0.0% 00:23:56.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.556 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.556 issued rwts: total=4684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.556 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.556 filename2: (groupid=0, jobs=1): err= 0: pid=1362198: Wed May 15 01:11:07 2024 00:23:56.556 read: IOPS=460, BW=1842KiB/s (1886kB/s)(18.0MiB/10005msec) 00:23:56.556 slat (usec): min=9, max=174, avg=49.69, stdev=22.71 00:23:56.556 clat (usec): min=23986, max=45122, avg=34319.04, stdev=1277.33 00:23:56.556 lat (usec): min=24026, max=45171, avg=34368.73, stdev=1275.81 00:23:56.556 clat percentiles (usec): 00:23:56.556 | 1.00th=[29230], 5.00th=[32900], 10.00th=[33424], 20.00th=[33817], 00:23:56.556 | 30.00th=[33817], 40.00th=[34341], 50.00th=[34341], 60.00th=[34341], 00:23:56.556 | 70.00th=[34866], 80.00th=[34866], 90.00th=[35390], 95.00th=[35914], 00:23:56.556 | 99.00th=[39584], 99.50th=[40633], 99.90th=[42206], 99.95th=[42206], 00:23:56.556 | 
99.99th=[45351] 00:23:56.556 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1838.95, stdev=61.56, samples=19 00:23:56.556 iops : min= 448, max= 480, avg=459.74, stdev=15.39, samples=19 00:23:56.556 lat (msec) : 50=100.00% 00:23:56.556 cpu : usr=93.72%, sys=3.19%, ctx=82, majf=0, minf=36 00:23:56.556 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:23:56.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.556 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.556 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.556 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.556 filename2: (groupid=0, jobs=1): err= 0: pid=1362199: Wed May 15 01:11:07 2024 00:23:56.556 read: IOPS=458, BW=1835KiB/s (1879kB/s)(17.9MiB/10005msec) 00:23:56.556 slat (usec): min=8, max=132, avg=46.85, stdev=27.08 00:23:56.556 clat (usec): min=5190, max=87394, avg=34471.80, stdev=5657.11 00:23:56.556 lat (usec): min=5205, max=87424, avg=34518.64, stdev=5656.76 00:23:56.556 clat percentiles (usec): 00:23:56.556 | 1.00th=[15664], 5.00th=[31851], 10.00th=[33162], 20.00th=[33817], 00:23:56.556 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[34341], 00:23:56.556 | 70.00th=[34866], 80.00th=[34866], 90.00th=[35390], 95.00th=[37487], 00:23:56.556 | 99.00th=[58983], 99.50th=[60031], 99.90th=[87557], 99.95th=[87557], 00:23:56.556 | 99.99th=[87557] 00:23:56.556 bw ( KiB/s): min= 1532, max= 1920, per=4.14%, avg=1822.11, stdev=93.93, samples=19 00:23:56.556 iops : min= 383, max= 480, avg=455.53, stdev=23.48, samples=19 00:23:56.556 lat (msec) : 10=0.35%, 20=1.53%, 50=95.97%, 100=2.16% 00:23:56.556 cpu : usr=97.99%, sys=1.55%, ctx=23, majf=0, minf=40 00:23:56.556 IO depths : 1=3.7%, 2=8.5%, 4=19.8%, 8=58.2%, 16=9.9%, 32=0.0%, >=64=0.0% 00:23:56.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.556 complete : 0=0.0%, 4=93.0%, 8=2.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.556 issued rwts: total=4590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.556 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.556 filename2: (groupid=0, jobs=1): err= 0: pid=1362200: Wed May 15 01:11:07 2024 00:23:56.556 read: IOPS=460, BW=1842KiB/s (1886kB/s)(18.0MiB/10006msec) 00:23:56.556 slat (usec): min=8, max=102, avg=43.90, stdev=13.34 00:23:56.556 clat (usec): min=15976, max=56431, avg=34360.35, stdev=2149.33 00:23:56.556 lat (usec): min=16008, max=56468, avg=34404.25, stdev=2150.20 00:23:56.556 clat percentiles (usec): 00:23:56.556 | 1.00th=[27395], 5.00th=[32900], 10.00th=[33424], 20.00th=[33817], 00:23:56.556 | 30.00th=[33817], 40.00th=[34341], 50.00th=[34341], 60.00th=[34341], 00:23:56.556 | 70.00th=[34866], 80.00th=[34866], 90.00th=[35390], 95.00th=[35914], 00:23:56.556 | 99.00th=[41157], 99.50th=[42206], 99.90th=[56361], 99.95th=[56361], 00:23:56.556 | 99.99th=[56361] 00:23:56.556 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1832.00, stdev=74.05, samples=19 00:23:56.556 iops : min= 416, max= 480, avg=458.00, stdev=18.51, samples=19 00:23:56.556 lat (msec) : 20=0.35%, 50=99.31%, 100=0.35% 00:23:56.556 cpu : usr=97.92%, sys=1.62%, ctx=15, majf=0, minf=38 00:23:56.556 IO depths : 1=5.4%, 2=11.5%, 4=24.8%, 8=51.2%, 16=7.1%, 32=0.0%, >=64=0.0% 00:23:56.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.556 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.556 issued rwts: total=4608,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:23:56.556 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.556 filename2: (groupid=0, jobs=1): err= 0: pid=1362201: Wed May 15 01:11:07 2024 00:23:56.556 read: IOPS=451, BW=1806KiB/s (1850kB/s)(17.6MiB/10005msec) 00:23:56.556 slat (usec): min=8, max=168, avg=32.10, stdev=20.97 00:23:56.556 clat (usec): min=5959, max=86677, avg=35221.30, stdev=5097.70 00:23:56.556 lat (usec): min=5968, max=86712, avg=35253.39, stdev=5097.35 00:23:56.556 clat percentiles (usec): 00:23:56.556 | 1.00th=[19530], 5.00th=[32637], 10.00th=[33424], 20.00th=[33817], 00:23:56.556 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34341], 60.00th=[34866], 00:23:56.556 | 70.00th=[34866], 80.00th=[35390], 90.00th=[36963], 95.00th=[40633], 00:23:56.556 | 99.00th=[57410], 99.50th=[61080], 99.90th=[81265], 99.95th=[86508], 00:23:56.556 | 99.99th=[86508] 00:23:56.556 bw ( KiB/s): min= 1536, max= 1920, per=4.07%, avg=1794.53, stdev=109.59, samples=19 00:23:56.556 iops : min= 384, max= 480, avg=448.63, stdev=27.40, samples=19 00:23:56.556 lat (msec) : 10=0.04%, 20=0.97%, 50=96.30%, 100=2.68% 00:23:56.556 cpu : usr=95.61%, sys=2.42%, ctx=74, majf=0, minf=50 00:23:56.556 IO depths : 1=0.2%, 2=3.5%, 4=15.1%, 8=66.6%, 16=14.6%, 32=0.0%, >=64=0.0% 00:23:56.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.556 complete : 0=0.0%, 4=92.4%, 8=3.9%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.556 issued rwts: total=4518,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.556 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.556 filename2: (groupid=0, jobs=1): err= 0: pid=1362202: Wed May 15 01:11:07 2024 00:23:56.556 read: IOPS=458, BW=1835KiB/s (1879kB/s)(18.0MiB/10046msec) 00:23:56.556 slat (usec): min=8, max=121, avg=47.12, stdev=21.15 00:23:56.556 clat (usec): min=14398, max=58455, avg=34344.37, stdev=2530.36 00:23:56.556 lat (usec): min=14407, max=58485, avg=34391.49, stdev=2531.16 00:23:56.556 clat percentiles (usec): 00:23:56.556 | 1.00th=[26608], 5.00th=[32900], 10.00th=[33424], 20.00th=[33817], 00:23:56.556 | 30.00th=[33817], 40.00th=[34341], 50.00th=[34341], 60.00th=[34341], 00:23:56.556 | 70.00th=[34866], 80.00th=[34866], 90.00th=[35390], 95.00th=[35914], 00:23:56.556 | 99.00th=[42206], 99.50th=[51643], 99.90th=[56361], 99.95th=[58459], 00:23:56.556 | 99.99th=[58459] 00:23:56.556 bw ( KiB/s): min= 1648, max= 1920, per=4.16%, avg=1832.00, stdev=71.70, samples=19 00:23:56.556 iops : min= 412, max= 480, avg=458.00, stdev=17.93, samples=19 00:23:56.556 lat (msec) : 20=0.43%, 50=99.05%, 100=0.52% 00:23:56.556 cpu : usr=97.27%, sys=1.83%, ctx=161, majf=0, minf=33 00:23:56.556 IO depths : 1=2.9%, 2=8.8%, 4=24.1%, 8=54.4%, 16=9.8%, 32=0.0%, >=64=0.0% 00:23:56.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.556 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.556 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.556 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.556 filename2: (groupid=0, jobs=1): err= 0: pid=1362203: Wed May 15 01:11:07 2024 00:23:56.556 read: IOPS=460, BW=1842KiB/s (1886kB/s)(18.0MiB/10005msec) 00:23:56.556 slat (usec): min=8, max=144, avg=49.16, stdev=21.17 00:23:56.556 clat (usec): min=24000, max=42371, avg=34311.46, stdev=995.46 00:23:56.556 lat (usec): min=24031, max=42444, avg=34360.62, stdev=995.33 00:23:56.556 clat percentiles (usec): 00:23:56.556 | 1.00th=[31851], 5.00th=[33162], 10.00th=[33424], 
20.00th=[33817], 00:23:56.556 | 30.00th=[33817], 40.00th=[34341], 50.00th=[34341], 60.00th=[34341], 00:23:56.556 | 70.00th=[34866], 80.00th=[34866], 90.00th=[35390], 95.00th=[35390], 00:23:56.556 | 99.00th=[37487], 99.50th=[39584], 99.90th=[41157], 99.95th=[42206], 00:23:56.556 | 99.99th=[42206] 00:23:56.556 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1838.95, stdev=63.16, samples=19 00:23:56.556 iops : min= 448, max= 480, avg=459.74, stdev=15.79, samples=19 00:23:56.556 lat (msec) : 50=100.00% 00:23:56.556 cpu : usr=97.76%, sys=1.78%, ctx=14, majf=0, minf=39 00:23:56.556 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:23:56.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.556 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.556 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.556 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:56.556 00:23:56.556 Run status group 0 (all jobs): 00:23:56.556 READ: bw=43.0MiB/s (45.1MB/s), 1802KiB/s-1884KiB/s (1845kB/s-1930kB/s), io=432MiB (453MB), run=10002-10046msec 00:23:56.556 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:23:56.556 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:56.556 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:56.556 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:56.556 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:56.556 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:56.556 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.556 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.556 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.556 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:56.556 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.556 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.556 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.556 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:56.556 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:56.556 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:23:56.556 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:56.556 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.556 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.556 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.556 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:56.556 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.556 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.557 bdev_null0 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.557 [2024-05-15 01:11:07.963201] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.557 bdev_null1 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- 
# config+=("$(cat <<-EOF 00:23:56.557 { 00:23:56.557 "params": { 00:23:56.557 "name": "Nvme$subsystem", 00:23:56.557 "trtype": "$TEST_TRANSPORT", 00:23:56.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.557 "adrfam": "ipv4", 00:23:56.557 "trsvcid": "$NVMF_PORT", 00:23:56.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.557 "hdgst": ${hdgst:-false}, 00:23:56.557 "ddgst": ${ddgst:-false} 00:23:56.557 }, 00:23:56.557 "method": "bdev_nvme_attach_controller" 00:23:56.557 } 00:23:56.557 EOF 00:23:56.557 )") 00:23:56.557 01:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:56.557 01:11:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:56.557 01:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:23:56.557 01:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:56.557 01:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:56.557 01:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:56.557 01:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:23:56.557 01:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:56.557 01:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:23:56.557 01:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:23:56.557 01:11:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:56.557 01:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:56.557 01:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:56.557 01:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:56.557 01:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:56.557 01:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:23:56.557 01:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:56.557 01:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:56.557 01:11:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:56.557 01:11:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:56.557 { 00:23:56.557 "params": { 00:23:56.557 "name": "Nvme$subsystem", 00:23:56.557 "trtype": "$TEST_TRANSPORT", 00:23:56.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:56.557 "adrfam": "ipv4", 00:23:56.557 "trsvcid": "$NVMF_PORT", 00:23:56.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:56.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:56.557 "hdgst": ${hdgst:-false}, 00:23:56.557 "ddgst": ${ddgst:-false} 00:23:56.557 }, 00:23:56.557 "method": "bdev_nvme_attach_controller" 00:23:56.557 } 00:23:56.557 EOF 00:23:56.557 )") 00:23:56.557 01:11:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:56.557 01:11:08 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:23:56.557 01:11:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:56.557 01:11:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:23:56.557 01:11:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:56.557 01:11:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:56.557 "params": { 00:23:56.557 "name": "Nvme0", 00:23:56.557 "trtype": "tcp", 00:23:56.557 "traddr": "10.0.0.2", 00:23:56.557 "adrfam": "ipv4", 00:23:56.557 "trsvcid": "4420", 00:23:56.557 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:56.557 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:56.557 "hdgst": false, 00:23:56.557 "ddgst": false 00:23:56.557 }, 00:23:56.557 "method": "bdev_nvme_attach_controller" 00:23:56.557 },{ 00:23:56.557 "params": { 00:23:56.557 "name": "Nvme1", 00:23:56.557 "trtype": "tcp", 00:23:56.557 "traddr": "10.0.0.2", 00:23:56.557 "adrfam": "ipv4", 00:23:56.557 "trsvcid": "4420", 00:23:56.557 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.557 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:56.557 "hdgst": false, 00:23:56.557 "ddgst": false 00:23:56.557 }, 00:23:56.557 "method": "bdev_nvme_attach_controller" 00:23:56.557 }' 00:23:56.557 01:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:56.557 01:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:56.557 01:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:56.557 01:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:56.557 01:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:23:56.557 01:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:56.557 01:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:56.557 01:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:56.557 01:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:23:56.557 01:11:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:56.557 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:56.557 ... 00:23:56.558 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:56.558 ... 
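The banner above summarizes the second fio_dif_rand_params pass: randread over two null-bdev files with mixed 8k/16k/128k block sizes, iodepth=8, numjobs=2 and a 5 second time-based run, all driven through the spdk_bdev ioengine with the JSON config shown earlier supplied on /dev/fd/62 and the job file generated on the fly by gen_fio_conf on /dev/fd/61. As a rough illustration only (this is a reconstruction, not the file the script actually writes, and the bdev names are assumed from the attached controllers Nvme0/Nvme1), an equivalent hand-written job file would look like:

  [global]
  ioengine=spdk_bdev
  thread=1
  rw=randread
  bs=8k,16k,128k
  iodepth=8
  numjobs=2
  time_based=1
  runtime=5

  [filename0]
  filename=Nvme0n1

  [filename1]
  filename=Nvme1n1

The two job sections combined with numjobs=2 account for the 4 threads fio reports when the run starts below; it is launched the same way as in the trace, e.g. LD_PRELOAD=<spdk_bdev plugin> /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf <json> <jobfile>.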
00:23:56.558 fio-3.35 00:23:56.558 Starting 4 threads 00:23:56.558 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.822 00:24:01.822 filename0: (groupid=0, jobs=1): err= 0: pid=1363582: Wed May 15 01:11:14 2024 00:24:01.822 read: IOPS=1932, BW=15.1MiB/s (15.8MB/s)(75.5MiB/5004msec) 00:24:01.822 slat (nsec): min=3816, max=52500, avg=14427.89, stdev=6432.88 00:24:01.822 clat (usec): min=1900, max=6857, avg=4097.74, stdev=568.11 00:24:01.822 lat (usec): min=1913, max=6891, avg=4112.17, stdev=568.20 00:24:01.822 clat percentiles (usec): 00:24:01.822 | 1.00th=[ 2868], 5.00th=[ 3294], 10.00th=[ 3490], 20.00th=[ 3752], 00:24:01.822 | 30.00th=[ 3851], 40.00th=[ 3982], 50.00th=[ 4047], 60.00th=[ 4113], 00:24:01.822 | 70.00th=[ 4178], 80.00th=[ 4293], 90.00th=[ 4817], 95.00th=[ 5276], 00:24:01.822 | 99.00th=[ 5997], 99.50th=[ 6390], 99.90th=[ 6587], 99.95th=[ 6718], 00:24:01.822 | 99.99th=[ 6849] 00:24:01.822 bw ( KiB/s): min=14928, max=15856, per=25.37%, avg=15462.40, stdev=305.19, samples=10 00:24:01.822 iops : min= 1866, max= 1982, avg=1932.80, stdev=38.15, samples=10 00:24:01.822 lat (msec) : 2=0.02%, 4=42.38%, 10=57.60% 00:24:01.822 cpu : usr=93.82%, sys=4.80%, ctx=157, majf=0, minf=0 00:24:01.822 IO depths : 1=0.1%, 2=2.3%, 4=66.4%, 8=31.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:01.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.822 complete : 0=0.0%, 4=95.1%, 8=4.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.822 issued rwts: total=9669,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.822 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:01.822 filename0: (groupid=0, jobs=1): err= 0: pid=1363583: Wed May 15 01:11:14 2024 00:24:01.822 read: IOPS=1950, BW=15.2MiB/s (16.0MB/s)(76.2MiB/5003msec) 00:24:01.822 slat (nsec): min=3890, max=51035, avg=12719.02, stdev=5672.23 00:24:01.822 clat (usec): min=1574, max=7301, avg=4060.56, stdev=651.15 00:24:01.822 lat (usec): min=1582, max=7313, avg=4073.28, stdev=651.19 00:24:01.822 clat percentiles (usec): 00:24:01.822 | 1.00th=[ 2606], 5.00th=[ 3097], 10.00th=[ 3326], 20.00th=[ 3654], 00:24:01.822 | 30.00th=[ 3818], 40.00th=[ 3949], 50.00th=[ 4047], 60.00th=[ 4113], 00:24:01.822 | 70.00th=[ 4178], 80.00th=[ 4293], 90.00th=[ 4817], 95.00th=[ 5407], 00:24:01.822 | 99.00th=[ 6128], 99.50th=[ 6325], 99.90th=[ 7046], 99.95th=[ 7046], 00:24:01.822 | 99.99th=[ 7308] 00:24:01.822 bw ( KiB/s): min=15040, max=16016, per=25.62%, avg=15611.20, stdev=314.99, samples=10 00:24:01.822 iops : min= 1880, max= 2002, avg=1951.40, stdev=39.37, samples=10 00:24:01.822 lat (msec) : 2=0.12%, 4=44.69%, 10=55.18% 00:24:01.822 cpu : usr=94.62%, sys=4.52%, ctx=42, majf=0, minf=0 00:24:01.822 IO depths : 1=0.2%, 2=3.3%, 4=68.7%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:01.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.822 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.822 issued rwts: total=9760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.822 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:01.822 filename1: (groupid=0, jobs=1): err= 0: pid=1363584: Wed May 15 01:11:14 2024 00:24:01.822 read: IOPS=1768, BW=13.8MiB/s (14.5MB/s)(69.1MiB/5003msec) 00:24:01.822 slat (nsec): min=6957, max=55318, avg=12297.58, stdev=6096.19 00:24:01.822 clat (usec): min=2668, max=10965, avg=4484.71, stdev=865.48 00:24:01.822 lat (usec): min=2675, max=11005, avg=4497.00, stdev=865.44 00:24:01.822 clat percentiles (usec): 00:24:01.822 | 1.00th=[ 3392], 5.00th=[ 
3654], 10.00th=[ 3785], 20.00th=[ 3916], 00:24:01.822 | 30.00th=[ 4015], 40.00th=[ 4080], 50.00th=[ 4146], 60.00th=[ 4228], 00:24:01.822 | 70.00th=[ 4490], 80.00th=[ 5080], 90.00th=[ 5800], 95.00th=[ 6325], 00:24:01.822 | 99.00th=[ 7308], 99.50th=[ 7635], 99.90th=[ 8717], 99.95th=[ 8848], 00:24:01.822 | 99.99th=[10945] 00:24:01.822 bw ( KiB/s): min=13504, max=14976, per=23.21%, avg=14146.90, stdev=489.33, samples=10 00:24:01.822 iops : min= 1688, max= 1872, avg=1768.30, stdev=61.08, samples=10 00:24:01.822 lat (msec) : 4=27.23%, 10=72.74%, 20=0.03% 00:24:01.822 cpu : usr=95.72%, sys=3.82%, ctx=6, majf=0, minf=9 00:24:01.822 IO depths : 1=0.3%, 2=0.7%, 4=72.9%, 8=26.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:01.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.822 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.822 issued rwts: total=8848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.822 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:01.822 filename1: (groupid=0, jobs=1): err= 0: pid=1363585: Wed May 15 01:11:14 2024 00:24:01.822 read: IOPS=1966, BW=15.4MiB/s (16.1MB/s)(76.9MiB/5004msec) 00:24:01.822 slat (nsec): min=5499, max=55052, avg=16554.37, stdev=6570.30 00:24:01.822 clat (usec): min=1220, max=6915, avg=4019.50, stdev=581.62 00:24:01.822 lat (usec): min=1232, max=6930, avg=4036.06, stdev=581.58 00:24:01.822 clat percentiles (usec): 00:24:01.822 | 1.00th=[ 2737], 5.00th=[ 3163], 10.00th=[ 3392], 20.00th=[ 3654], 00:24:01.822 | 30.00th=[ 3818], 40.00th=[ 3916], 50.00th=[ 4015], 60.00th=[ 4080], 00:24:01.822 | 70.00th=[ 4146], 80.00th=[ 4228], 90.00th=[ 4686], 95.00th=[ 5145], 00:24:01.822 | 99.00th=[ 6063], 99.50th=[ 6325], 99.90th=[ 6718], 99.95th=[ 6849], 00:24:01.822 | 99.99th=[ 6915] 00:24:01.822 bw ( KiB/s): min=15296, max=16160, per=25.82%, avg=15737.60, stdev=302.38, samples=10 00:24:01.822 iops : min= 1912, max= 2020, avg=1967.20, stdev=37.80, samples=10 00:24:01.822 lat (msec) : 2=0.09%, 4=48.07%, 10=51.84% 00:24:01.822 cpu : usr=95.32%, sys=4.08%, ctx=7, majf=0, minf=9 00:24:01.822 IO depths : 1=0.2%, 2=2.2%, 4=66.7%, 8=31.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:01.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.822 complete : 0=0.0%, 4=95.3%, 8=4.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.822 issued rwts: total=9842,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.823 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:01.823 00:24:01.823 Run status group 0 (all jobs): 00:24:01.823 READ: bw=59.5MiB/s (62.4MB/s), 13.8MiB/s-15.4MiB/s (14.5MB/s-16.1MB/s), io=298MiB (312MB), run=5003-5004msec 00:24:02.080 01:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:24:02.080 01:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:02.080 01:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:02.080 01:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:02.080 01:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:02.080 01:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:02.080 01:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.080 01:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.080 01:11:14 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.080 01:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:02.081 01:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.081 01:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.081 01:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.081 01:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:02.081 01:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:02.081 01:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:02.081 01:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:02.081 01:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.081 01:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.081 01:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.081 01:11:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:02.081 01:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.081 01:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.081 01:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.081 00:24:02.081 real 0m24.494s 00:24:02.081 user 4m30.033s 00:24:02.081 sys 0m7.767s 00:24:02.081 01:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:02.081 01:11:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:02.081 ************************************ 00:24:02.081 END TEST fio_dif_rand_params 00:24:02.081 ************************************ 00:24:02.081 01:11:14 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:24:02.081 01:11:14 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:02.081 01:11:14 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:02.081 01:11:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:02.081 ************************************ 00:24:02.081 START TEST fio_dif_digest 00:24:02.081 ************************************ 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:02.081 bdev_null0 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:02.081 [2024-05-15 01:11:14.383647] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:02.081 { 00:24:02.081 "params": { 00:24:02.081 "name": "Nvme$subsystem", 00:24:02.081 "trtype": "$TEST_TRANSPORT", 00:24:02.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.081 "adrfam": "ipv4", 00:24:02.081 "trsvcid": "$NVMF_PORT", 00:24:02.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.081 "hdgst": ${hdgst:-false}, 00:24:02.081 "ddgst": ${ddgst:-false} 00:24:02.081 }, 00:24:02.081 "method": 
"bdev_nvme_attach_controller" 00:24:02.081 } 00:24:02.081 EOF 00:24:02.081 )") 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:02.081 "params": { 00:24:02.081 "name": "Nvme0", 00:24:02.081 "trtype": "tcp", 00:24:02.081 "traddr": "10.0.0.2", 00:24:02.081 "adrfam": "ipv4", 00:24:02.081 "trsvcid": "4420", 00:24:02.081 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:02.081 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:02.081 "hdgst": true, 00:24:02.081 "ddgst": true 00:24:02.081 }, 00:24:02.081 "method": "bdev_nvme_attach_controller" 00:24:02.081 }' 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:24:02.081 01:11:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:02.338 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:24:02.338 ... 
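[editor's note] For readers following the trace: the fio_bdev wrapper above amounts to a plain fio run against the SPDK bdev fio plugin, fed the JSON config that gen_nvmf_target_json just printed. A minimal standalone sketch under stated assumptions — the config file path, job name, and the Nvme0n1 bdev/filename are illustrative, while the transport address, digest flags, and the 128k/iodepth=3/numjobs=3/runtime=10 job parameters are the ones the traced dif.sh settings use:

# write the attach-controller config (the outer "subsystems" wrapper is the standard SPDK JSON config layout)
cat > /tmp/nvme0_tcp.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true,
            "ddgst": true
          }
        }
      ]
    }
  ]
}
JSON

# run fio through the SPDK bdev ioengine; thread=1 is required by the SPDK fio plugins,
# and the filename assumes namespace 1 of controller Nvme0 shows up as bdev Nvme0n1
LD_PRELOAD=./spdk/build/fio/spdk_bdev /usr/src/fio/fio \
  --ioengine=spdk_bdev --spdk_json_conf=/tmp/nvme0_tcp.json --thread=1 \
  --name=digest --filename=Nvme0n1 --rw=randread \
  --bs=128k --iodepth=3 --numjobs=3 --runtime=10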
00:24:02.338 fio-3.35 00:24:02.338 Starting 3 threads 00:24:02.338 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.534 00:24:14.534 filename0: (groupid=0, jobs=1): err= 0: pid=1364451: Wed May 15 01:11:25 2024 00:24:14.534 read: IOPS=166, BW=20.8MiB/s (21.8MB/s)(209MiB/10051msec) 00:24:14.534 slat (nsec): min=7374, max=37635, avg=14438.34, stdev=3950.28 00:24:14.534 clat (usec): min=9875, max=61354, avg=18021.98, stdev=5765.34 00:24:14.534 lat (usec): min=9888, max=61374, avg=18036.42, stdev=5765.47 00:24:14.534 clat percentiles (usec): 00:24:14.534 | 1.00th=[10552], 5.00th=[13042], 10.00th=[14091], 20.00th=[15139], 00:24:14.534 | 30.00th=[16188], 40.00th=[17171], 50.00th=[17957], 60.00th=[18220], 00:24:14.534 | 70.00th=[18744], 80.00th=[19530], 90.00th=[20055], 95.00th=[21103], 00:24:14.534 | 99.00th=[57410], 99.50th=[59507], 99.90th=[61080], 99.95th=[61604], 00:24:14.534 | 99.99th=[61604] 00:24:14.534 bw ( KiB/s): min=18176, max=23808, per=41.29%, avg=21326.60, stdev=1843.28, samples=20 00:24:14.534 iops : min= 142, max= 186, avg=166.60, stdev=14.42, samples=20 00:24:14.534 lat (msec) : 10=0.24%, 20=88.32%, 50=9.71%, 100=1.74% 00:24:14.534 cpu : usr=92.71%, sys=6.69%, ctx=31, majf=0, minf=172 00:24:14.534 IO depths : 1=2.1%, 2=97.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:14.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.534 issued rwts: total=1669,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.534 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:14.534 filename0: (groupid=0, jobs=1): err= 0: pid=1364452: Wed May 15 01:11:25 2024 00:24:14.534 read: IOPS=111, BW=14.0MiB/s (14.7MB/s)(141MiB/10048msec) 00:24:14.534 slat (nsec): min=4726, max=90069, avg=14997.65, stdev=4693.93 00:24:14.534 clat (msec): min=8, max=107, avg=26.76, stdev=15.20 00:24:14.534 lat (msec): min=8, max=107, avg=26.78, stdev=15.20 00:24:14.534 clat percentiles (msec): 00:24:14.534 | 1.00th=[ 11], 5.00th=[ 14], 10.00th=[ 18], 20.00th=[ 21], 00:24:14.534 | 30.00th=[ 22], 40.00th=[ 22], 50.00th=[ 23], 60.00th=[ 24], 00:24:14.534 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 61], 95.00th=[ 64], 00:24:14.534 | 99.00th=[ 69], 99.50th=[ 103], 99.90th=[ 108], 99.95th=[ 108], 00:24:14.534 | 99.99th=[ 108] 00:24:14.534 bw ( KiB/s): min= 9728, max=20480, per=27.78%, avg=14348.80, stdev=2240.20, samples=20 00:24:14.534 iops : min= 76, max= 160, avg=112.10, stdev=17.50, samples=20 00:24:14.534 lat (msec) : 10=0.89%, 20=18.06%, 50=68.95%, 100=11.48%, 250=0.62% 00:24:14.534 cpu : usr=94.02%, sys=5.46%, ctx=38, majf=0, minf=231 00:24:14.534 IO depths : 1=1.2%, 2=98.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:14.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.534 issued rwts: total=1124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.534 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:14.534 filename0: (groupid=0, jobs=1): err= 0: pid=1364453: Wed May 15 01:11:25 2024 00:24:14.534 read: IOPS=125, BW=15.7MiB/s (16.5MB/s)(158MiB/10049msec) 00:24:14.534 slat (usec): min=5, max=101, avg=19.07, stdev= 6.76 00:24:14.534 clat (msec): min=7, max=101, avg=23.81, stdev=11.82 00:24:14.534 lat (msec): min=7, max=101, avg=23.83, stdev=11.82 00:24:14.534 clat percentiles (msec): 00:24:14.535 | 1.00th=[ 10], 5.00th=[ 15], 10.00th=[ 18], 20.00th=[ 19], 
00:24:14.535 | 30.00th=[ 21], 40.00th=[ 21], 50.00th=[ 22], 60.00th=[ 23], 00:24:14.535 | 70.00th=[ 23], 80.00th=[ 24], 90.00th=[ 26], 95.00th=[ 61], 00:24:14.535 | 99.00th=[ 65], 99.50th=[ 66], 99.90th=[ 103], 99.95th=[ 103], 00:24:14.535 | 99.99th=[ 103] 00:24:14.535 bw ( KiB/s): min=12800, max=19968, per=31.25%, avg=16140.80, stdev=1909.37, samples=20 00:24:14.535 iops : min= 100, max= 156, avg=126.10, stdev=14.92, samples=20 00:24:14.535 lat (msec) : 10=1.50%, 20=27.79%, 50=63.18%, 100=7.36%, 250=0.16% 00:24:14.535 cpu : usr=93.64%, sys=5.68%, ctx=77, majf=0, minf=210 00:24:14.535 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:14.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.535 issued rwts: total=1263,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.535 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:14.535 00:24:14.535 Run status group 0 (all jobs): 00:24:14.535 READ: bw=50.4MiB/s (52.9MB/s), 14.0MiB/s-20.8MiB/s (14.7MB/s-21.8MB/s), io=507MiB (532MB), run=10048-10051msec 00:24:14.535 01:11:25 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:24:14.535 01:11:25 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:24:14.535 01:11:25 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:24:14.535 01:11:25 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:14.535 01:11:25 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:24:14.535 01:11:25 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:14.535 01:11:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.535 01:11:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:14.535 01:11:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.535 01:11:25 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:14.535 01:11:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.535 01:11:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:14.535 01:11:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.535 00:24:14.535 real 0m11.176s 00:24:14.535 user 0m29.148s 00:24:14.535 sys 0m2.105s 00:24:14.535 01:11:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:14.535 01:11:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:14.535 ************************************ 00:24:14.535 END TEST fio_dif_digest 00:24:14.535 ************************************ 00:24:14.535 01:11:25 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:24:14.535 01:11:25 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:24:14.535 01:11:25 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:14.535 01:11:25 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:24:14.535 01:11:25 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:14.535 01:11:25 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:24:14.535 01:11:25 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:14.535 01:11:25 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:14.535 rmmod nvme_tcp 00:24:14.535 rmmod nvme_fabrics 00:24:14.535 rmmod nvme_keyring 00:24:14.535 01:11:25 nvmf_dif -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:14.535 01:11:25 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:24:14.535 01:11:25 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:24:14.535 01:11:25 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1358251 ']' 00:24:14.535 01:11:25 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1358251 00:24:14.535 01:11:25 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 1358251 ']' 00:24:14.535 01:11:25 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 1358251 00:24:14.535 01:11:25 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:24:14.535 01:11:25 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:14.535 01:11:25 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1358251 00:24:14.535 01:11:25 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:14.535 01:11:25 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:14.535 01:11:25 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1358251' 00:24:14.535 killing process with pid 1358251 00:24:14.535 01:11:25 nvmf_dif -- common/autotest_common.sh@965 -- # kill 1358251 00:24:14.535 [2024-05-15 01:11:25.617581] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:14.535 01:11:25 nvmf_dif -- common/autotest_common.sh@970 -- # wait 1358251 00:24:14.535 01:11:25 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:24:14.535 01:11:25 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:14.793 Waiting for block devices as requested 00:24:14.793 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:24:15.050 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:15.050 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:15.050 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:15.050 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:15.326 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:15.326 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:15.326 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:15.326 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:15.592 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:15.592 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:15.592 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:15.592 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:15.851 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:15.851 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:15.851 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:15.851 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:16.109 01:11:28 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:16.109 01:11:28 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:16.109 01:11:28 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:16.109 01:11:28 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:16.109 01:11:28 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.109 01:11:28 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:16.109 01:11:28 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.011 01:11:30 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:18.011 00:24:18.011 real 1m8.573s 00:24:18.011 user 6m22.654s 00:24:18.011 
sys 0m22.014s 00:24:18.011 01:11:30 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:18.011 01:11:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:18.011 ************************************ 00:24:18.011 END TEST nvmf_dif 00:24:18.011 ************************************ 00:24:18.011 01:11:30 -- spdk/autotest.sh@289 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:18.011 01:11:30 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:18.011 01:11:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:18.011 01:11:30 -- common/autotest_common.sh@10 -- # set +x 00:24:18.270 ************************************ 00:24:18.270 START TEST nvmf_abort_qd_sizes 00:24:18.270 ************************************ 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:18.270 * Looking for test storage... 00:24:18.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.270 01:11:30 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:24:18.270 01:11:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:20.800 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:20.800 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:24:20.800 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:20.800 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:20.800 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:20.800 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:20.800 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:20.800 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:24:20.800 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:20.800 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:24:20.800 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:20.801 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:20.801 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:20.801 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:20.801 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
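[editor's note] An aside on the device-discovery lines above: the "Found net devices under 0000:0a:00.x" messages come from globbing each PCI function's net/ directory in sysfs. A minimal sketch of the same lookup, using the two E810 ports found in this run:

for pci in 0000:0a:00.0 0000:0a:00.1; do
  # each entry under net/ is the kernel net device backed by that PCI function
  ls "/sys/bus/pci/devices/$pci/net/"    # prints cvl_0_0 and cvl_0_1 on this node
done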
00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:20.801 01:11:32 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:20.801 01:11:33 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:20.801 01:11:33 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:20.801 01:11:33 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:20.801 01:11:33 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:20.801 01:11:33 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:20.801 01:11:33 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:20.801 01:11:33 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:20.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:20.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:24:20.801 00:24:20.801 --- 10.0.0.2 ping statistics --- 00:24:20.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.801 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:24:20.801 01:11:33 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:20.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:20.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:24:20.801 00:24:20.801 --- 10.0.0.1 ping statistics --- 00:24:20.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.801 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:24:20.801 01:11:33 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:20.801 01:11:33 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:24:20.801 01:11:33 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:24:20.801 01:11:33 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:22.177 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:22.177 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:22.177 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:22.177 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:22.177 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:22.177 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:22.177 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:22.177 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:22.177 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:22.177 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:22.177 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:22.177 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:22.177 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:22.177 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:22.177 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:22.177 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:23.113 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:24:23.372 01:11:35 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:23.372 01:11:35 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:23.372 01:11:35 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:23.372 01:11:35 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:23.372 01:11:35 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:23.372 01:11:35 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:23.372 01:11:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:24:23.372 01:11:35 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:23.372 01:11:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:23.372 01:11:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:23.372 01:11:35 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1369855 00:24:23.372 01:11:35 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:24:23.372 01:11:35 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1369855 00:24:23.372 01:11:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 1369855 ']' 00:24:23.372 01:11:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.372 01:11:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:23.372 01:11:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:23.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.372 01:11:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:23.372 01:11:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:23.372 [2024-05-15 01:11:35.678986] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:24:23.372 [2024-05-15 01:11:35.679080] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:23.372 EAL: No free 2048 kB hugepages reported on node 1 00:24:23.372 [2024-05-15 01:11:35.754365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:23.630 [2024-05-15 01:11:35.867319] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:23.630 [2024-05-15 01:11:35.867374] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:23.630 [2024-05-15 01:11:35.867403] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:23.630 [2024-05-15 01:11:35.867415] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:23.631 [2024-05-15 01:11:35.867425] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:23.631 [2024-05-15 01:11:35.867556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.631 [2024-05-15 01:11:35.867598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:23.631 [2024-05-15 01:11:35.867655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:23.631 [2024-05-15 01:11:35.867657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.631 01:11:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:23.631 01:11:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:24:23.631 01:11:35 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:23.631 01:11:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:23.631 01:11:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:23.631 01:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:23.631 01:11:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:24:23.631 01:11:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:24:23.631 01:11:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:24:23.631 01:11:36 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:24:23.889 01:11:36 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:24:23.889 01:11:36 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:24:23.889 01:11:36 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:24:23.889 01:11:36 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:24:23.889 01:11:36 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:24:23.889 01:11:36 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:24:23.889 01:11:36 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:24:23.889 01:11:36 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:24:23.889 01:11:36 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:24:23.889 01:11:36 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:24:23.889 01:11:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:24:23.889 01:11:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:24:23.889 01:11:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:24:23.889 01:11:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:23.889 01:11:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:23.889 01:11:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:23.889 ************************************ 00:24:23.889 START TEST spdk_target_abort 00:24:23.889 ************************************ 00:24:23.889 01:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:24:23.889 01:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:24:23.889 01:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:24:23.889 01:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.889 01:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:27.167 spdk_targetn1 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:27.167 [2024-05-15 01:11:38.915108] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:27.167 [2024-05-15 01:11:38.947106] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:27.167 [2024-05-15 01:11:38.947391] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:27.167 01:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:27.167 EAL: No free 2048 kB hugepages reported on node 1 00:24:30.450 Initializing NVMe Controllers 00:24:30.450 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:30.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:30.450 Initialization complete. Launching workers. 00:24:30.450 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9363, failed: 0 00:24:30.450 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1328, failed to submit 8035 00:24:30.450 success 855, unsuccess 473, failed 0 00:24:30.450 01:11:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:30.450 01:11:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:30.450 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.731 Initializing NVMe Controllers 00:24:33.731 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:33.731 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:33.731 Initialization complete. Launching workers. 00:24:33.732 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8676, failed: 0 00:24:33.732 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1230, failed to submit 7446 00:24:33.732 success 293, unsuccess 937, failed 0 00:24:33.732 01:11:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:33.732 01:11:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:33.732 EAL: No free 2048 kB hugepages reported on node 1 00:24:37.064 Initializing NVMe Controllers 00:24:37.064 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:37.064 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:37.064 Initialization complete. Launching workers. 
00:24:37.064 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31050, failed: 0 00:24:37.064 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2410, failed to submit 28640 00:24:37.064 success 539, unsuccess 1871, failed 0 00:24:37.064 01:11:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:24:37.064 01:11:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.064 01:11:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:37.064 01:11:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.064 01:11:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:24:37.064 01:11:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.064 01:11:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:37.999 01:11:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.999 01:11:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1369855 00:24:37.999 01:11:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 1369855 ']' 00:24:37.999 01:11:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 1369855 00:24:37.999 01:11:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:24:37.999 01:11:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:37.999 01:11:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1369855 00:24:37.999 01:11:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:37.999 01:11:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:37.999 01:11:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1369855' 00:24:37.999 killing process with pid 1369855 00:24:37.999 01:11:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 1369855 00:24:37.999 [2024-05-15 01:11:50.089791] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:37.999 01:11:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 1369855 00:24:37.999 00:24:37.999 real 0m14.305s 00:24:37.999 user 0m52.180s 00:24:37.999 sys 0m3.407s 00:24:37.999 01:11:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:37.999 01:11:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:37.999 ************************************ 00:24:37.999 END TEST spdk_target_abort 00:24:37.999 ************************************ 00:24:38.258 01:11:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:24:38.258 01:11:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:38.258 01:11:50 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:24:38.258 01:11:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:38.258 ************************************ 00:24:38.258 START TEST kernel_target_abort 00:24:38.258 ************************************ 00:24:38.258 01:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:24:38.258 01:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:24:38.258 01:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@728 -- # local ip 00:24:38.258 01:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@729 -- # ip_candidates=() 00:24:38.258 01:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@729 -- # local -A ip_candidates 00:24:38.258 01:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.258 01:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.258 01:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:24:38.258 01:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.258 01:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:24:38.258 01:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:24:38.259 01:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:24:38.259 01:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:38.259 01:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:38.259 01:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:38.259 01:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:38.259 01:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:38.259 01:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:38.259 01:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:24:38.259 01:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:38.259 01:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:38.259 01:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:38.259 01:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:39.633 Waiting for block devices as requested 00:24:39.633 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:24:39.633 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:39.633 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:39.892 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:39.892 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:39.892 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:39.892 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:40.151 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:40.151 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:40.151 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:40.151 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:40.411 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:40.411 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:40.411 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:40.411 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:40.670 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:40.670 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:40.670 01:11:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:40.670 01:11:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:40.670 01:11:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:40.670 01:11:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:24:40.670 01:11:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:40.670 01:11:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:24:40.670 01:11:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:40.670 01:11:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:40.670 01:11:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:40.670 No valid GPT data, bailing 00:24:40.670 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:40.670 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:24:40.670 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:24:40.670 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:40.670 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:24:40.670 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:40.670 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:40.670 01:11:53 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:40.670 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:40.670 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:24:40.670 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:24:40.670 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:24:40.670 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:40.670 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:24:40.670 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:24:40.670 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:24:40.670 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:40.670 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:24:40.670 00:24:40.670 Discovery Log Number of Records 2, Generation counter 2 00:24:40.670 =====Discovery Log Entry 0====== 00:24:40.670 trtype: tcp 00:24:40.670 adrfam: ipv4 00:24:40.670 subtype: current discovery subsystem 00:24:40.670 treq: not specified, sq flow control disable supported 00:24:40.670 portid: 1 00:24:40.670 trsvcid: 4420 00:24:40.670 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:40.670 traddr: 10.0.0.1 00:24:40.670 eflags: none 00:24:40.670 sectype: none 00:24:40.670 =====Discovery Log Entry 1====== 00:24:40.670 trtype: tcp 00:24:40.670 adrfam: ipv4 00:24:40.670 subtype: nvme subsystem 00:24:40.670 treq: not specified, sq flow control disable supported 00:24:40.670 portid: 1 00:24:40.670 trsvcid: 4420 00:24:40.670 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:40.670 traddr: 10.0.0.1 00:24:40.670 eflags: none 00:24:40.670 sectype: none 00:24:40.671 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:24:40.671 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:40.671 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:40.671 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:24:40.671 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:40.671 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:40.671 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:40.671 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:40.671 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:40.671 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:40.671 01:11:53 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:40.671 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:40.671 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:40.671 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:40.671 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:24:40.671 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:40.671 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:24:40.671 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:40.671 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:40.671 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:40.671 01:11:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:40.929 EAL: No free 2048 kB hugepages reported on node 1 00:24:44.208 Initializing NVMe Controllers 00:24:44.208 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:44.208 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:44.208 Initialization complete. Launching workers. 00:24:44.208 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 26285, failed: 0 00:24:44.209 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26285, failed to submit 0 00:24:44.209 success 0, unsuccess 26285, failed 0 00:24:44.209 01:11:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:44.209 01:11:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:44.209 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.489 Initializing NVMe Controllers 00:24:47.489 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:47.489 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:47.489 Initialization complete. Launching workers. 
00:24:47.489 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 54675, failed: 0 00:24:47.489 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 13762, failed to submit 40913 00:24:47.489 success 0, unsuccess 13762, failed 0 00:24:47.489 01:11:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:47.489 01:11:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:47.489 EAL: No free 2048 kB hugepages reported on node 1 00:24:50.017 Initializing NVMe Controllers 00:24:50.017 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:50.017 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:50.017 Initialization complete. Launching workers. 00:24:50.017 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 55130, failed: 0 00:24:50.017 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 13758, failed to submit 41372 00:24:50.017 success 0, unsuccess 13758, failed 0 00:24:50.017 01:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:24:50.017 01:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:50.017 01:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:24:50.017 01:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:50.017 01:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:50.017 01:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:50.017 01:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:50.017 01:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:50.017 01:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:50.017 01:12:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:51.396 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:51.396 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:51.396 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:51.396 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:51.396 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:51.396 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:51.396 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:51.396 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:51.396 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:51.396 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:51.396 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:51.396 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:51.396 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:51.396 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:24:51.396 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:51.396 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:52.332 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:24:52.591 00:24:52.591 real 0m14.384s 00:24:52.591 user 0m4.522s 00:24:52.591 sys 0m3.597s 00:24:52.591 01:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:52.591 01:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:52.591 ************************************ 00:24:52.591 END TEST kernel_target_abort 00:24:52.591 ************************************ 00:24:52.591 01:12:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:52.591 01:12:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:24:52.591 01:12:04 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:52.591 01:12:04 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:24:52.591 01:12:04 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:52.591 01:12:04 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:24:52.591 01:12:04 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:52.591 01:12:04 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:52.591 rmmod nvme_tcp 00:24:52.591 rmmod nvme_fabrics 00:24:52.591 rmmod nvme_keyring 00:24:52.591 01:12:04 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:52.591 01:12:04 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:24:52.591 01:12:04 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:24:52.591 01:12:04 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1369855 ']' 00:24:52.591 01:12:04 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1369855 00:24:52.591 01:12:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 1369855 ']' 00:24:52.591 01:12:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 1369855 00:24:52.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1369855) - No such process 00:24:52.591 01:12:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 1369855 is not found' 00:24:52.591 Process with pid 1369855 is not found 00:24:52.591 01:12:04 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:24:52.591 01:12:04 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:53.986 Waiting for block devices as requested 00:24:53.986 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:24:53.986 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:53.986 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:54.268 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:54.268 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:54.268 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:54.268 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:54.268 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:54.526 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:54.526 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:54.526 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:54.784 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:54.784 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:54.784 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:54.784 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:55.043 0000:80:04.1 
(8086 0e21): vfio-pci -> ioatdma 00:24:55.043 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:55.043 01:12:07 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:55.043 01:12:07 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:55.043 01:12:07 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:55.043 01:12:07 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:55.043 01:12:07 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.043 01:12:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:55.043 01:12:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.569 01:12:09 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:57.569 00:24:57.569 real 0m38.971s 00:24:57.569 user 0m59.054s 00:24:57.569 sys 0m10.979s 00:24:57.569 01:12:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:57.569 01:12:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:57.569 ************************************ 00:24:57.569 END TEST nvmf_abort_qd_sizes 00:24:57.569 ************************************ 00:24:57.569 01:12:09 -- spdk/autotest.sh@291 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:24:57.569 01:12:09 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:57.569 01:12:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:57.569 01:12:09 -- common/autotest_common.sh@10 -- # set +x 00:24:57.569 ************************************ 00:24:57.569 START TEST keyring_file 00:24:57.569 ************************************ 00:24:57.569 01:12:09 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:24:57.569 * Looking for test storage... 
00:24:57.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:24:57.569 01:12:09 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:24:57.569 01:12:09 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.569 01:12:09 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:24:57.569 01:12:09 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.569 01:12:09 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.569 01:12:09 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.569 01:12:09 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.569 01:12:09 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.569 01:12:09 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.569 01:12:09 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.569 01:12:09 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.569 01:12:09 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.569 01:12:09 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.570 01:12:09 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:57.570 01:12:09 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:57.570 01:12:09 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.570 01:12:09 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.570 01:12:09 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:57.570 01:12:09 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.570 01:12:09 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.570 01:12:09 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.570 01:12:09 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.570 01:12:09 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.570 01:12:09 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.570 01:12:09 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.570 01:12:09 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.570 01:12:09 keyring_file -- paths/export.sh@5 -- # export PATH 00:24:57.570 01:12:09 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.570 01:12:09 keyring_file -- nvmf/common.sh@47 -- # : 0 00:24:57.570 01:12:09 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:57.570 01:12:09 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:57.570 01:12:09 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.570 01:12:09 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.570 01:12:09 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.570 01:12:09 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:57.570 01:12:09 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:57.570 01:12:09 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:57.570 01:12:09 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:57.570 01:12:09 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:57.570 01:12:09 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:57.570 01:12:09 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:24:57.570 01:12:09 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:24:57.570 01:12:09 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:24:57.570 01:12:09 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:57.570 01:12:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:57.570 01:12:09 keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:57.570 01:12:09 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:57.570 01:12:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:57.570 01:12:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:57.570 01:12:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.GgjblstpCy 00:24:57.570 01:12:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:57.570 01:12:09 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:57.570 01:12:09 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:24:57.570 01:12:09 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:57.570 01:12:09 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:24:57.570 01:12:09 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:24:57.570 01:12:09 keyring_file -- nvmf/common.sh@705 -- # python - 00:24:57.570 01:12:09 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.GgjblstpCy 00:24:57.570 01:12:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.GgjblstpCy 00:24:57.570 01:12:09 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.GgjblstpCy 00:24:57.570 01:12:09 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:24:57.570 01:12:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:57.570 01:12:09 keyring_file -- keyring/common.sh@17 -- # name=key1 00:24:57.570 01:12:09 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:57.570 01:12:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:57.570 01:12:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:57.570 01:12:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.h67xHn65ED 00:24:57.570 01:12:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:57.570 01:12:09 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:57.570 01:12:09 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:24:57.570 01:12:09 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:57.570 01:12:09 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:24:57.570 01:12:09 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:24:57.570 01:12:09 keyring_file -- nvmf/common.sh@705 -- # python - 00:24:57.570 01:12:09 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.h67xHn65ED 00:24:57.570 01:12:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.h67xHn65ED 00:24:57.570 01:12:09 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.h67xHn65ED 00:24:57.570 01:12:09 keyring_file -- keyring/file.sh@30 -- # tgtpid=1376543 00:24:57.570 01:12:09 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:24:57.570 01:12:09 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1376543 00:24:57.570 01:12:09 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 1376543 ']' 00:24:57.570 01:12:09 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:57.570 01:12:09 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:57.570 01:12:09 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:57.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:57.570 01:12:09 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:57.570 01:12:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:57.570 [2024-05-15 01:12:09.635910] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:24:57.570 [2024-05-15 01:12:09.636009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1376543 ] 00:24:57.570 EAL: No free 2048 kB hugepages reported on node 1 00:24:57.570 [2024-05-15 01:12:09.709210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.570 [2024-05-15 01:12:09.828446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.503 01:12:10 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:58.503 01:12:10 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:24:58.503 01:12:10 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:24:58.503 01:12:10 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.503 01:12:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:58.503 [2024-05-15 01:12:10.591025] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:58.503 null0 00:24:58.503 [2024-05-15 01:12:10.623024] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:58.503 [2024-05-15 01:12:10.623107] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:58.503 [2024-05-15 01:12:10.623554] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:58.503 [2024-05-15 01:12:10.631068] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:58.503 01:12:10 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.503 01:12:10 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:58.503 01:12:10 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:24:58.503 01:12:10 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:58.503 01:12:10 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:58.503 01:12:10 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:58.503 01:12:10 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:58.503 01:12:10 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:58.503 01:12:10 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:58.503 01:12:10 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.503 01:12:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:58.503 [2024-05-15 01:12:10.639073] nvmf_rpc.c: 768:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:24:58.503 request: 00:24:58.503 { 00:24:58.503 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:24:58.503 "secure_channel": false, 00:24:58.503 "listen_address": { 00:24:58.503 "trtype": "tcp", 00:24:58.503 "traddr": "127.0.0.1", 00:24:58.503 "trsvcid": "4420" 00:24:58.503 }, 00:24:58.503 "method": "nvmf_subsystem_add_listener", 00:24:58.503 "req_id": 1 00:24:58.503 } 00:24:58.503 Got JSON-RPC error response 00:24:58.503 response: 00:24:58.503 { 00:24:58.503 "code": -32602, 00:24:58.503 
"message": "Invalid parameters" 00:24:58.503 } 00:24:58.503 01:12:10 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:58.503 01:12:10 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:24:58.503 01:12:10 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:58.503 01:12:10 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:58.503 01:12:10 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:58.503 01:12:10 keyring_file -- keyring/file.sh@46 -- # bperfpid=1376678 00:24:58.503 01:12:10 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:24:58.503 01:12:10 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1376678 /var/tmp/bperf.sock 00:24:58.503 01:12:10 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 1376678 ']' 00:24:58.503 01:12:10 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:58.503 01:12:10 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:58.503 01:12:10 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:58.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:58.503 01:12:10 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:58.503 01:12:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:58.503 [2024-05-15 01:12:10.685775] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 00:24:58.503 [2024-05-15 01:12:10.685855] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1376678 ] 00:24:58.503 EAL: No free 2048 kB hugepages reported on node 1 00:24:58.503 [2024-05-15 01:12:10.759121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.503 [2024-05-15 01:12:10.875224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:59.437 01:12:11 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:59.437 01:12:11 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:24:59.437 01:12:11 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.GgjblstpCy 00:24:59.437 01:12:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.GgjblstpCy 00:24:59.695 01:12:11 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.h67xHn65ED 00:24:59.695 01:12:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.h67xHn65ED 00:24:59.953 01:12:12 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:24:59.953 01:12:12 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:24:59.953 01:12:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:59.953 01:12:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:59.953 01:12:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:25:00.210 01:12:12 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.GgjblstpCy == \/\t\m\p\/\t\m\p\.\G\g\j\b\l\s\t\p\C\y ]] 00:25:00.210 01:12:12 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:25:00.210 01:12:12 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:25:00.211 01:12:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:00.211 01:12:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:00.211 01:12:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:00.211 01:12:12 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.h67xHn65ED == \/\t\m\p\/\t\m\p\.\h\6\7\x\H\n\6\5\E\D ]] 00:25:00.211 01:12:12 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:25:00.211 01:12:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:00.211 01:12:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:00.469 01:12:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:00.469 01:12:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:00.469 01:12:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:00.469 01:12:12 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:25:00.469 01:12:12 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:25:00.469 01:12:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:00.469 01:12:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:00.469 01:12:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:00.469 01:12:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:00.469 01:12:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:00.726 01:12:13 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:25:00.726 01:12:13 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:00.726 01:12:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:00.983 [2024-05-15 01:12:13.324770] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:01.241 nvme0n1 00:25:01.241 01:12:13 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:25:01.241 01:12:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:01.241 01:12:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:01.241 01:12:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:01.241 01:12:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:01.241 01:12:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:01.499 01:12:13 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:25:01.499 01:12:13 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:25:01.499 01:12:13 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:01.499 01:12:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:01.499 01:12:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:01.499 01:12:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:01.499 01:12:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:01.757 01:12:13 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:25:01.757 01:12:13 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:01.757 Running I/O for 1 seconds... 00:25:02.690 00:25:02.690 Latency(us) 00:25:02.690 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.690 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:25:02.690 nvme0n1 : 1.02 4266.75 16.67 0.00 0.00 29751.43 3737.98 39807.05 00:25:02.690 =================================================================================================================== 00:25:02.690 Total : 4266.75 16.67 0.00 0.00 29751.43 3737.98 39807.05 00:25:02.690 0 00:25:02.690 01:12:15 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:02.690 01:12:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:02.948 01:12:15 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:25:02.948 01:12:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:02.948 01:12:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:02.948 01:12:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:02.948 01:12:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:02.948 01:12:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:03.205 01:12:15 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:25:03.205 01:12:15 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:25:03.205 01:12:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:03.205 01:12:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:03.205 01:12:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:03.205 01:12:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:03.205 01:12:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:03.463 01:12:15 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:25:03.463 01:12:15 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:03.463 01:12:15 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:25:03.463 01:12:15 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:03.463 01:12:15 keyring_file -- common/autotest_common.sh@636 -- # 
local arg=bperf_cmd 00:25:03.463 01:12:15 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:03.463 01:12:15 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:25:03.463 01:12:15 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:03.463 01:12:15 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:03.463 01:12:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:03.722 [2024-05-15 01:12:16.035792] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:03.722 [2024-05-15 01:12:16.035832] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x113af30 (107): Transport endpoint is not connected 00:25:03.722 [2024-05-15 01:12:16.036822] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x113af30 (9): Bad file descriptor 00:25:03.722 [2024-05-15 01:12:16.037819] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:03.722 [2024-05-15 01:12:16.037841] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:25:03.722 [2024-05-15 01:12:16.037855] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:25:03.722 request: 00:25:03.722 { 00:25:03.722 "name": "nvme0", 00:25:03.722 "trtype": "tcp", 00:25:03.722 "traddr": "127.0.0.1", 00:25:03.722 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:03.722 "adrfam": "ipv4", 00:25:03.722 "trsvcid": "4420", 00:25:03.722 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:03.722 "psk": "key1", 00:25:03.722 "method": "bdev_nvme_attach_controller", 00:25:03.722 "req_id": 1 00:25:03.722 } 00:25:03.722 Got JSON-RPC error response 00:25:03.722 response: 00:25:03.722 { 00:25:03.722 "code": -32602, 00:25:03.722 "message": "Invalid parameters" 00:25:03.722 } 00:25:03.722 01:12:16 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:25:03.722 01:12:16 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:03.722 01:12:16 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:03.722 01:12:16 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:03.722 01:12:16 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:25:03.722 01:12:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:03.722 01:12:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:03.722 01:12:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:03.722 01:12:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:03.722 01:12:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:03.981 01:12:16 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:25:03.981 01:12:16 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:25:03.981 01:12:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:03.981 01:12:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:03.981 01:12:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:03.981 01:12:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:03.981 01:12:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:04.239 01:12:16 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:25:04.239 01:12:16 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:25:04.239 01:12:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:04.496 01:12:16 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:25:04.496 01:12:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:25:04.754 01:12:17 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:25:04.754 01:12:17 keyring_file -- keyring/file.sh@77 -- # jq length 00:25:04.754 01:12:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:05.012 01:12:17 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:25:05.012 01:12:17 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.GgjblstpCy 00:25:05.012 01:12:17 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.GgjblstpCy 00:25:05.012 01:12:17 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:25:05.012 01:12:17 
keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.GgjblstpCy 00:25:05.012 01:12:17 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:25:05.012 01:12:17 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:05.012 01:12:17 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:25:05.012 01:12:17 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:05.012 01:12:17 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.GgjblstpCy 00:25:05.012 01:12:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.GgjblstpCy 00:25:05.270 [2024-05-15 01:12:17.526704] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.GgjblstpCy': 0100660 00:25:05.270 [2024-05-15 01:12:17.526744] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:05.270 request: 00:25:05.270 { 00:25:05.270 "name": "key0", 00:25:05.270 "path": "/tmp/tmp.GgjblstpCy", 00:25:05.270 "method": "keyring_file_add_key", 00:25:05.270 "req_id": 1 00:25:05.270 } 00:25:05.270 Got JSON-RPC error response 00:25:05.270 response: 00:25:05.270 { 00:25:05.270 "code": -1, 00:25:05.270 "message": "Operation not permitted" 00:25:05.270 } 00:25:05.270 01:12:17 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:25:05.270 01:12:17 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:05.270 01:12:17 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:05.270 01:12:17 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:05.270 01:12:17 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.GgjblstpCy 00:25:05.270 01:12:17 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.GgjblstpCy 00:25:05.270 01:12:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.GgjblstpCy 00:25:05.527 01:12:17 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.GgjblstpCy 00:25:05.527 01:12:17 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:25:05.527 01:12:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:05.527 01:12:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:05.527 01:12:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:05.527 01:12:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:05.527 01:12:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:05.785 01:12:18 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:25:05.785 01:12:18 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:05.785 01:12:18 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:25:05.785 01:12:18 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:05.785 01:12:18 
keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:25:05.785 01:12:18 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:05.785 01:12:18 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:25:05.785 01:12:18 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:05.785 01:12:18 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:05.785 01:12:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:06.043 [2024-05-15 01:12:18.240662] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.GgjblstpCy': No such file or directory 00:25:06.043 [2024-05-15 01:12:18.240697] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:25:06.043 [2024-05-15 01:12:18.240728] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:25:06.043 [2024-05-15 01:12:18.240740] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:06.043 [2024-05-15 01:12:18.240753] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:25:06.043 request: 00:25:06.043 { 00:25:06.043 "name": "nvme0", 00:25:06.043 "trtype": "tcp", 00:25:06.043 "traddr": "127.0.0.1", 00:25:06.043 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:06.043 "adrfam": "ipv4", 00:25:06.043 "trsvcid": "4420", 00:25:06.043 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:06.043 "psk": "key0", 00:25:06.043 "method": "bdev_nvme_attach_controller", 00:25:06.043 "req_id": 1 00:25:06.043 } 00:25:06.043 Got JSON-RPC error response 00:25:06.043 response: 00:25:06.043 { 00:25:06.043 "code": -19, 00:25:06.043 "message": "No such device" 00:25:06.043 } 00:25:06.043 01:12:18 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:25:06.043 01:12:18 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:06.043 01:12:18 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:06.043 01:12:18 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:06.043 01:12:18 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:25:06.043 01:12:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:06.300 01:12:18 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:25:06.300 01:12:18 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:06.300 01:12:18 keyring_file -- keyring/common.sh@17 -- # name=key0 00:25:06.301 01:12:18 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:06.301 01:12:18 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:06.301 01:12:18 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:06.301 01:12:18 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.2uIZbhBsAs 00:25:06.301 01:12:18 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:06.301 01:12:18 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:06.301 01:12:18 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:25:06.301 01:12:18 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:25:06.301 01:12:18 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:25:06.301 01:12:18 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:25:06.301 01:12:18 keyring_file -- nvmf/common.sh@705 -- # python - 00:25:06.301 01:12:18 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.2uIZbhBsAs 00:25:06.301 01:12:18 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.2uIZbhBsAs 00:25:06.301 01:12:18 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.2uIZbhBsAs 00:25:06.301 01:12:18 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2uIZbhBsAs 00:25:06.301 01:12:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2uIZbhBsAs 00:25:06.559 01:12:18 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:06.559 01:12:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:06.817 nvme0n1 00:25:06.817 01:12:19 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:25:06.817 01:12:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:06.817 01:12:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:06.817 01:12:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:06.817 01:12:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:06.817 01:12:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:07.075 01:12:19 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:25:07.075 01:12:19 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:25:07.075 01:12:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:07.334 01:12:19 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:25:07.334 01:12:19 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:25:07.334 01:12:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:07.334 01:12:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:07.334 01:12:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:07.623 01:12:19 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:25:07.623 01:12:19 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:25:07.623 01:12:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:07.623 01:12:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:07.623 01:12:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:07.623 01:12:19 keyring_file -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:07.623 01:12:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:07.892 01:12:20 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:25:07.892 01:12:20 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:07.892 01:12:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:08.150 01:12:20 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:25:08.150 01:12:20 keyring_file -- keyring/file.sh@104 -- # jq length 00:25:08.150 01:12:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:08.407 01:12:20 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:25:08.407 01:12:20 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2uIZbhBsAs 00:25:08.407 01:12:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2uIZbhBsAs 00:25:08.665 01:12:20 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.h67xHn65ED 00:25:08.665 01:12:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.h67xHn65ED 00:25:08.922 01:12:21 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:08.922 01:12:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:09.180 nvme0n1 00:25:09.180 01:12:21 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:25:09.180 01:12:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:25:09.439 01:12:21 keyring_file -- keyring/file.sh@112 -- # config='{ 00:25:09.439 "subsystems": [ 00:25:09.439 { 00:25:09.439 "subsystem": "keyring", 00:25:09.439 "config": [ 00:25:09.439 { 00:25:09.439 "method": "keyring_file_add_key", 00:25:09.439 "params": { 00:25:09.439 "name": "key0", 00:25:09.439 "path": "/tmp/tmp.2uIZbhBsAs" 00:25:09.439 } 00:25:09.439 }, 00:25:09.439 { 00:25:09.439 "method": "keyring_file_add_key", 00:25:09.439 "params": { 00:25:09.439 "name": "key1", 00:25:09.439 "path": "/tmp/tmp.h67xHn65ED" 00:25:09.439 } 00:25:09.439 } 00:25:09.439 ] 00:25:09.439 }, 00:25:09.439 { 00:25:09.439 "subsystem": "iobuf", 00:25:09.439 "config": [ 00:25:09.439 { 00:25:09.439 "method": "iobuf_set_options", 00:25:09.439 "params": { 00:25:09.439 "small_pool_count": 8192, 00:25:09.439 "large_pool_count": 1024, 00:25:09.439 "small_bufsize": 8192, 00:25:09.439 "large_bufsize": 135168 00:25:09.439 } 00:25:09.439 } 00:25:09.439 ] 00:25:09.439 }, 00:25:09.439 { 00:25:09.439 "subsystem": "sock", 00:25:09.439 "config": [ 00:25:09.439 { 00:25:09.439 "method": "sock_impl_set_options", 00:25:09.439 "params": { 00:25:09.439 
"impl_name": "posix", 00:25:09.439 "recv_buf_size": 2097152, 00:25:09.439 "send_buf_size": 2097152, 00:25:09.439 "enable_recv_pipe": true, 00:25:09.439 "enable_quickack": false, 00:25:09.439 "enable_placement_id": 0, 00:25:09.439 "enable_zerocopy_send_server": true, 00:25:09.439 "enable_zerocopy_send_client": false, 00:25:09.439 "zerocopy_threshold": 0, 00:25:09.439 "tls_version": 0, 00:25:09.439 "enable_ktls": false 00:25:09.439 } 00:25:09.439 }, 00:25:09.439 { 00:25:09.439 "method": "sock_impl_set_options", 00:25:09.439 "params": { 00:25:09.439 "impl_name": "ssl", 00:25:09.439 "recv_buf_size": 4096, 00:25:09.439 "send_buf_size": 4096, 00:25:09.439 "enable_recv_pipe": true, 00:25:09.439 "enable_quickack": false, 00:25:09.439 "enable_placement_id": 0, 00:25:09.439 "enable_zerocopy_send_server": true, 00:25:09.439 "enable_zerocopy_send_client": false, 00:25:09.439 "zerocopy_threshold": 0, 00:25:09.439 "tls_version": 0, 00:25:09.439 "enable_ktls": false 00:25:09.439 } 00:25:09.439 } 00:25:09.439 ] 00:25:09.439 }, 00:25:09.439 { 00:25:09.439 "subsystem": "vmd", 00:25:09.439 "config": [] 00:25:09.439 }, 00:25:09.439 { 00:25:09.439 "subsystem": "accel", 00:25:09.439 "config": [ 00:25:09.439 { 00:25:09.439 "method": "accel_set_options", 00:25:09.439 "params": { 00:25:09.439 "small_cache_size": 128, 00:25:09.439 "large_cache_size": 16, 00:25:09.439 "task_count": 2048, 00:25:09.439 "sequence_count": 2048, 00:25:09.439 "buf_count": 2048 00:25:09.439 } 00:25:09.439 } 00:25:09.439 ] 00:25:09.439 }, 00:25:09.439 { 00:25:09.439 "subsystem": "bdev", 00:25:09.439 "config": [ 00:25:09.439 { 00:25:09.439 "method": "bdev_set_options", 00:25:09.439 "params": { 00:25:09.439 "bdev_io_pool_size": 65535, 00:25:09.439 "bdev_io_cache_size": 256, 00:25:09.439 "bdev_auto_examine": true, 00:25:09.439 "iobuf_small_cache_size": 128, 00:25:09.439 "iobuf_large_cache_size": 16 00:25:09.439 } 00:25:09.439 }, 00:25:09.439 { 00:25:09.439 "method": "bdev_raid_set_options", 00:25:09.439 "params": { 00:25:09.439 "process_window_size_kb": 1024 00:25:09.439 } 00:25:09.439 }, 00:25:09.439 { 00:25:09.439 "method": "bdev_iscsi_set_options", 00:25:09.439 "params": { 00:25:09.439 "timeout_sec": 30 00:25:09.439 } 00:25:09.439 }, 00:25:09.439 { 00:25:09.439 "method": "bdev_nvme_set_options", 00:25:09.439 "params": { 00:25:09.439 "action_on_timeout": "none", 00:25:09.439 "timeout_us": 0, 00:25:09.439 "timeout_admin_us": 0, 00:25:09.439 "keep_alive_timeout_ms": 10000, 00:25:09.439 "arbitration_burst": 0, 00:25:09.439 "low_priority_weight": 0, 00:25:09.439 "medium_priority_weight": 0, 00:25:09.439 "high_priority_weight": 0, 00:25:09.439 "nvme_adminq_poll_period_us": 10000, 00:25:09.439 "nvme_ioq_poll_period_us": 0, 00:25:09.439 "io_queue_requests": 512, 00:25:09.439 "delay_cmd_submit": true, 00:25:09.439 "transport_retry_count": 4, 00:25:09.439 "bdev_retry_count": 3, 00:25:09.439 "transport_ack_timeout": 0, 00:25:09.439 "ctrlr_loss_timeout_sec": 0, 00:25:09.439 "reconnect_delay_sec": 0, 00:25:09.440 "fast_io_fail_timeout_sec": 0, 00:25:09.440 "disable_auto_failback": false, 00:25:09.440 "generate_uuids": false, 00:25:09.440 "transport_tos": 0, 00:25:09.440 "nvme_error_stat": false, 00:25:09.440 "rdma_srq_size": 0, 00:25:09.440 "io_path_stat": false, 00:25:09.440 "allow_accel_sequence": false, 00:25:09.440 "rdma_max_cq_size": 0, 00:25:09.440 "rdma_cm_event_timeout_ms": 0, 00:25:09.440 "dhchap_digests": [ 00:25:09.440 "sha256", 00:25:09.440 "sha384", 00:25:09.440 "sha512" 00:25:09.440 ], 00:25:09.440 "dhchap_dhgroups": [ 00:25:09.440 "null", 
00:25:09.440 "ffdhe2048", 00:25:09.440 "ffdhe3072", 00:25:09.440 "ffdhe4096", 00:25:09.440 "ffdhe6144", 00:25:09.440 "ffdhe8192" 00:25:09.440 ] 00:25:09.440 } 00:25:09.440 }, 00:25:09.440 { 00:25:09.440 "method": "bdev_nvme_attach_controller", 00:25:09.440 "params": { 00:25:09.440 "name": "nvme0", 00:25:09.440 "trtype": "TCP", 00:25:09.440 "adrfam": "IPv4", 00:25:09.440 "traddr": "127.0.0.1", 00:25:09.440 "trsvcid": "4420", 00:25:09.440 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:09.440 "prchk_reftag": false, 00:25:09.440 "prchk_guard": false, 00:25:09.440 "ctrlr_loss_timeout_sec": 0, 00:25:09.440 "reconnect_delay_sec": 0, 00:25:09.440 "fast_io_fail_timeout_sec": 0, 00:25:09.440 "psk": "key0", 00:25:09.440 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:09.440 "hdgst": false, 00:25:09.440 "ddgst": false 00:25:09.440 } 00:25:09.440 }, 00:25:09.440 { 00:25:09.440 "method": "bdev_nvme_set_hotplug", 00:25:09.440 "params": { 00:25:09.440 "period_us": 100000, 00:25:09.440 "enable": false 00:25:09.440 } 00:25:09.440 }, 00:25:09.440 { 00:25:09.440 "method": "bdev_wait_for_examine" 00:25:09.440 } 00:25:09.440 ] 00:25:09.440 }, 00:25:09.440 { 00:25:09.440 "subsystem": "nbd", 00:25:09.440 "config": [] 00:25:09.440 } 00:25:09.440 ] 00:25:09.440 }' 00:25:09.440 01:12:21 keyring_file -- keyring/file.sh@114 -- # killprocess 1376678 00:25:09.440 01:12:21 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 1376678 ']' 00:25:09.440 01:12:21 keyring_file -- common/autotest_common.sh@950 -- # kill -0 1376678 00:25:09.440 01:12:21 keyring_file -- common/autotest_common.sh@951 -- # uname 00:25:09.440 01:12:21 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:09.440 01:12:21 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1376678 00:25:09.440 01:12:21 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:09.440 01:12:21 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:09.440 01:12:21 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1376678' 00:25:09.440 killing process with pid 1376678 00:25:09.440 01:12:21 keyring_file -- common/autotest_common.sh@965 -- # kill 1376678 00:25:09.440 Received shutdown signal, test time was about 1.000000 seconds 00:25:09.440 00:25:09.440 Latency(us) 00:25:09.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.440 =================================================================================================================== 00:25:09.440 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:09.440 01:12:21 keyring_file -- common/autotest_common.sh@970 -- # wait 1376678 00:25:09.698 01:12:22 keyring_file -- keyring/file.sh@117 -- # bperfpid=1378149 00:25:09.698 01:12:22 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1378149 /var/tmp/bperf.sock 00:25:09.698 01:12:22 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 1378149 ']' 00:25:09.698 01:12:22 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:09.698 01:12:22 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:25:09.698 01:12:22 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:09.698 01:12:22 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bperf.sock...' 00:25:09.698 01:12:22 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:25:09.698 "subsystems": [ 00:25:09.698 { 00:25:09.698 "subsystem": "keyring", 00:25:09.698 "config": [ 00:25:09.698 { 00:25:09.698 "method": "keyring_file_add_key", 00:25:09.698 "params": { 00:25:09.698 "name": "key0", 00:25:09.698 "path": "/tmp/tmp.2uIZbhBsAs" 00:25:09.698 } 00:25:09.698 }, 00:25:09.698 { 00:25:09.698 "method": "keyring_file_add_key", 00:25:09.698 "params": { 00:25:09.698 "name": "key1", 00:25:09.698 "path": "/tmp/tmp.h67xHn65ED" 00:25:09.698 } 00:25:09.698 } 00:25:09.698 ] 00:25:09.698 }, 00:25:09.698 { 00:25:09.698 "subsystem": "iobuf", 00:25:09.698 "config": [ 00:25:09.698 { 00:25:09.698 "method": "iobuf_set_options", 00:25:09.698 "params": { 00:25:09.698 "small_pool_count": 8192, 00:25:09.698 "large_pool_count": 1024, 00:25:09.698 "small_bufsize": 8192, 00:25:09.698 "large_bufsize": 135168 00:25:09.698 } 00:25:09.698 } 00:25:09.698 ] 00:25:09.698 }, 00:25:09.698 { 00:25:09.698 "subsystem": "sock", 00:25:09.698 "config": [ 00:25:09.698 { 00:25:09.698 "method": "sock_impl_set_options", 00:25:09.698 "params": { 00:25:09.698 "impl_name": "posix", 00:25:09.698 "recv_buf_size": 2097152, 00:25:09.698 "send_buf_size": 2097152, 00:25:09.698 "enable_recv_pipe": true, 00:25:09.698 "enable_quickack": false, 00:25:09.698 "enable_placement_id": 0, 00:25:09.698 "enable_zerocopy_send_server": true, 00:25:09.698 "enable_zerocopy_send_client": false, 00:25:09.698 "zerocopy_threshold": 0, 00:25:09.698 "tls_version": 0, 00:25:09.698 "enable_ktls": false 00:25:09.698 } 00:25:09.698 }, 00:25:09.698 { 00:25:09.698 "method": "sock_impl_set_options", 00:25:09.698 "params": { 00:25:09.698 "impl_name": "ssl", 00:25:09.698 "recv_buf_size": 4096, 00:25:09.698 "send_buf_size": 4096, 00:25:09.698 "enable_recv_pipe": true, 00:25:09.698 "enable_quickack": false, 00:25:09.698 "enable_placement_id": 0, 00:25:09.698 "enable_zerocopy_send_server": true, 00:25:09.698 "enable_zerocopy_send_client": false, 00:25:09.698 "zerocopy_threshold": 0, 00:25:09.698 "tls_version": 0, 00:25:09.698 "enable_ktls": false 00:25:09.698 } 00:25:09.698 } 00:25:09.698 ] 00:25:09.698 }, 00:25:09.698 { 00:25:09.698 "subsystem": "vmd", 00:25:09.698 "config": [] 00:25:09.698 }, 00:25:09.698 { 00:25:09.698 "subsystem": "accel", 00:25:09.698 "config": [ 00:25:09.698 { 00:25:09.698 "method": "accel_set_options", 00:25:09.698 "params": { 00:25:09.698 "small_cache_size": 128, 00:25:09.698 "large_cache_size": 16, 00:25:09.698 "task_count": 2048, 00:25:09.698 "sequence_count": 2048, 00:25:09.698 "buf_count": 2048 00:25:09.698 } 00:25:09.698 } 00:25:09.698 ] 00:25:09.698 }, 00:25:09.698 { 00:25:09.698 "subsystem": "bdev", 00:25:09.698 "config": [ 00:25:09.698 { 00:25:09.698 "method": "bdev_set_options", 00:25:09.698 "params": { 00:25:09.698 "bdev_io_pool_size": 65535, 00:25:09.698 "bdev_io_cache_size": 256, 00:25:09.698 "bdev_auto_examine": true, 00:25:09.698 "iobuf_small_cache_size": 128, 00:25:09.698 "iobuf_large_cache_size": 16 00:25:09.698 } 00:25:09.698 }, 00:25:09.698 { 00:25:09.698 "method": "bdev_raid_set_options", 00:25:09.698 "params": { 00:25:09.698 "process_window_size_kb": 1024 00:25:09.698 } 00:25:09.698 }, 00:25:09.698 { 00:25:09.698 "method": "bdev_iscsi_set_options", 00:25:09.698 "params": { 00:25:09.698 "timeout_sec": 30 00:25:09.698 } 00:25:09.698 }, 00:25:09.698 { 00:25:09.698 "method": "bdev_nvme_set_options", 00:25:09.698 "params": { 00:25:09.698 "action_on_timeout": "none", 00:25:09.698 "timeout_us": 0, 00:25:09.698 
"timeout_admin_us": 0, 00:25:09.698 "keep_alive_timeout_ms": 10000, 00:25:09.698 "arbitration_burst": 0, 00:25:09.698 "low_priority_weight": 0, 00:25:09.698 "medium_priority_weight": 0, 00:25:09.698 "high_priority_weight": 0, 00:25:09.698 "nvme_adminq_poll_period_us": 10000, 00:25:09.698 "nvme_ioq_poll_period_us": 0, 00:25:09.698 "io_queue_requests": 512, 00:25:09.698 "delay_cmd_submit": true, 00:25:09.698 "transport_retry_count": 4, 00:25:09.698 "bdev_retry_count": 3, 00:25:09.698 "transport_ack_timeout": 0, 00:25:09.698 "ctrlr_loss_timeout_sec": 0, 00:25:09.698 "reconnect_delay_sec": 0, 00:25:09.698 "fast_io_fail_timeout_sec": 0, 00:25:09.698 "disable_auto_failback": false, 00:25:09.698 "generate_uuids": false, 00:25:09.698 "transport_tos": 0, 00:25:09.698 "nvme_error_stat": false, 00:25:09.698 "rdma_srq_size": 0, 00:25:09.698 "io_path_stat": false, 00:25:09.698 "allow_accel_sequence": false, 00:25:09.698 "rdma_max_cq_size": 0, 00:25:09.698 "rdma_cm_event_timeout_ms": 0, 00:25:09.698 "dhchap_digests": [ 00:25:09.698 "sha256", 00:25:09.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:09.698 "sha384", 00:25:09.698 "sha512" 00:25:09.698 ], 00:25:09.698 "dhchap_dhgroups": [ 00:25:09.698 "null", 00:25:09.698 "ffdhe2048", 00:25:09.698 "ffdhe3072", 00:25:09.698 "ffdhe4096", 00:25:09.698 "ffdhe6144", 00:25:09.698 "ffdhe8192" 00:25:09.698 ] 00:25:09.698 } 00:25:09.698 }, 00:25:09.698 { 00:25:09.699 "method": "bdev_nvme_attach_controller", 00:25:09.699 "params": { 00:25:09.699 "name": "nvme0", 00:25:09.699 "trtype": "TCP", 00:25:09.699 "adrfam": "IPv4", 00:25:09.699 "traddr": "127.0.0.1", 00:25:09.699 "trsvcid": "4420", 00:25:09.699 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:09.699 "prchk_reftag": false, 00:25:09.699 "prchk_guard": false, 00:25:09.699 "ctrlr_loss_timeout_sec": 0, 00:25:09.699 "reconnect_delay_sec": 0, 00:25:09.699 "fast_io_fail_timeout_sec": 0, 00:25:09.699 "psk": "key0", 00:25:09.699 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:09.699 "hdgst": false, 00:25:09.699 "ddgst": false 00:25:09.699 } 00:25:09.699 }, 00:25:09.699 { 00:25:09.699 "method": "bdev_nvme_set_hotplug", 00:25:09.699 "params": { 00:25:09.699 "period_us": 100000, 00:25:09.699 "enable": false 00:25:09.699 } 00:25:09.699 }, 00:25:09.699 { 00:25:09.699 "method": "bdev_wait_for_examine" 00:25:09.699 } 00:25:09.699 ] 00:25:09.699 }, 00:25:09.699 { 00:25:09.699 "subsystem": "nbd", 00:25:09.699 "config": [] 00:25:09.699 } 00:25:09.699 ] 00:25:09.699 }' 00:25:09.699 01:12:22 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:09.699 01:12:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:09.699 [2024-05-15 01:12:22.081196] Starting SPDK v24.05-pre git sha1 297733650 / DPDK 23.11.0 initialization... 
00:25:09.699 [2024-05-15 01:12:22.081309] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1378149 ] 00:25:09.957 EAL: No free 2048 kB hugepages reported on node 1 00:25:09.957 [2024-05-15 01:12:22.154878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.957 [2024-05-15 01:12:22.271479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:10.215 [2024-05-15 01:12:22.457399] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:10.781 01:12:23 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:10.781 01:12:23 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:25:10.781 01:12:23 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:25:10.781 01:12:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:10.781 01:12:23 keyring_file -- keyring/file.sh@120 -- # jq length 00:25:11.038 01:12:23 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:25:11.038 01:12:23 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:25:11.038 01:12:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:11.038 01:12:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:11.038 01:12:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:11.038 01:12:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:11.038 01:12:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:11.296 01:12:23 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:25:11.296 01:12:23 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:25:11.296 01:12:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:11.296 01:12:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:11.296 01:12:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:11.296 01:12:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:11.296 01:12:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:11.553 01:12:23 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:25:11.553 01:12:23 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:25:11.553 01:12:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:25:11.553 01:12:23 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:25:11.811 01:12:24 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:25:11.811 01:12:24 keyring_file -- keyring/file.sh@1 -- # cleanup 00:25:11.811 01:12:24 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.2uIZbhBsAs /tmp/tmp.h67xHn65ED 00:25:11.811 01:12:24 keyring_file -- keyring/file.sh@20 -- # killprocess 1378149 00:25:11.811 01:12:24 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 1378149 ']' 00:25:11.811 01:12:24 keyring_file -- common/autotest_common.sh@950 -- # kill -0 1378149 00:25:11.811 01:12:24 keyring_file -- common/autotest_common.sh@951 -- # 
uname 00:25:11.811 01:12:24 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:11.811 01:12:24 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1378149 00:25:11.811 01:12:24 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:11.811 01:12:24 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:11.811 01:12:24 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1378149' 00:25:11.811 killing process with pid 1378149 00:25:11.811 01:12:24 keyring_file -- common/autotest_common.sh@965 -- # kill 1378149 00:25:11.811 Received shutdown signal, test time was about 1.000000 seconds 00:25:11.811 00:25:11.811 Latency(us) 00:25:11.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.811 =================================================================================================================== 00:25:11.811 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:11.811 01:12:24 keyring_file -- common/autotest_common.sh@970 -- # wait 1378149 00:25:12.069 01:12:24 keyring_file -- keyring/file.sh@21 -- # killprocess 1376543 00:25:12.069 01:12:24 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 1376543 ']' 00:25:12.069 01:12:24 keyring_file -- common/autotest_common.sh@950 -- # kill -0 1376543 00:25:12.069 01:12:24 keyring_file -- common/autotest_common.sh@951 -- # uname 00:25:12.069 01:12:24 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:12.069 01:12:24 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1376543 00:25:12.069 01:12:24 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:12.069 01:12:24 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:12.069 01:12:24 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1376543' 00:25:12.069 killing process with pid 1376543 00:25:12.069 01:12:24 keyring_file -- common/autotest_common.sh@965 -- # kill 1376543 00:25:12.069 [2024-05-15 01:12:24.353473] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:12.069 [2024-05-15 01:12:24.353532] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:12.069 01:12:24 keyring_file -- common/autotest_common.sh@970 -- # wait 1376543 00:25:12.635 00:25:12.635 real 0m15.392s 00:25:12.635 user 0m36.905s 00:25:12.635 sys 0m3.384s 00:25:12.635 01:12:24 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:12.635 01:12:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:12.635 ************************************ 00:25:12.635 END TEST keyring_file 00:25:12.635 ************************************ 00:25:12.635 01:12:24 -- spdk/autotest.sh@292 -- # [[ n == y ]] 00:25:12.635 01:12:24 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:25:12.635 01:12:24 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:25:12.635 01:12:24 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:25:12.635 01:12:24 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:25:12.635 01:12:24 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:25:12.635 01:12:24 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:25:12.635 01:12:24 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:25:12.635 
01:12:24 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:25:12.635 01:12:24 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:25:12.635 01:12:24 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:25:12.635 01:12:24 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:25:12.635 01:12:24 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:25:12.635 01:12:24 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:25:12.635 01:12:24 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:25:12.635 01:12:24 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:25:12.635 01:12:24 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:25:12.635 01:12:24 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:25:12.635 01:12:24 -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:12.635 01:12:24 -- common/autotest_common.sh@10 -- # set +x 00:25:12.635 01:12:24 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:25:12.635 01:12:24 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:25:12.635 01:12:24 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:25:12.635 01:12:24 -- common/autotest_common.sh@10 -- # set +x 00:25:14.536 INFO: APP EXITING 00:25:14.536 INFO: killing all VMs 00:25:14.536 INFO: killing vhost app 00:25:14.536 INFO: EXIT DONE 00:25:15.912 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:25:15.912 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:25:15.912 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:25:15.912 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:25:15.912 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:25:15.912 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:25:15.912 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:25:15.912 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:25:15.912 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:25:15.912 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:25:15.912 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:25:15.912 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:25:15.912 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:25:15.912 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:25:15.912 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:25:15.912 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:25:15.912 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:25:17.288 Cleaning 00:25:17.288 Removing: /var/run/dpdk/spdk0/config 00:25:17.288 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:17.288 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:17.288 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:17.288 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:17.288 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:25:17.288 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:25:17.288 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:25:17.288 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:25:17.288 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:17.288 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:17.288 Removing: /var/run/dpdk/spdk1/config 00:25:17.288 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:25:17.288 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:25:17.288 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:25:17.288 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:25:17.288 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:25:17.288 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:25:17.288 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:25:17.288 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:25:17.288 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:25:17.288 Removing: /var/run/dpdk/spdk1/hugepage_info 00:25:17.288 Removing: /var/run/dpdk/spdk1/mp_socket 00:25:17.288 Removing: /var/run/dpdk/spdk2/config 00:25:17.288 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:25:17.288 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:25:17.288 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:25:17.288 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:25:17.288 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:25:17.288 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:25:17.288 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:25:17.288 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:25:17.288 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:25:17.288 Removing: /var/run/dpdk/spdk2/hugepage_info 00:25:17.288 Removing: /var/run/dpdk/spdk3/config 00:25:17.288 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:25:17.288 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:25:17.288 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:25:17.288 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:25:17.288 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:25:17.288 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:25:17.288 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:25:17.288 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:25:17.288 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:25:17.288 Removing: /var/run/dpdk/spdk3/hugepage_info 00:25:17.288 Removing: /var/run/dpdk/spdk4/config 00:25:17.288 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:25:17.288 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:25:17.288 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:25:17.288 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:25:17.288 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:25:17.288 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:25:17.288 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:25:17.288 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:25:17.288 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:25:17.288 Removing: /var/run/dpdk/spdk4/hugepage_info 00:25:17.288 Removing: /dev/shm/bdev_svc_trace.1 00:25:17.288 Removing: /dev/shm/nvmf_trace.0 00:25:17.288 Removing: /dev/shm/spdk_tgt_trace.pid1123307 00:25:17.288 Removing: /var/run/dpdk/spdk0 00:25:17.288 Removing: /var/run/dpdk/spdk1 00:25:17.288 Removing: /var/run/dpdk/spdk2 00:25:17.288 Removing: /var/run/dpdk/spdk3 00:25:17.288 Removing: /var/run/dpdk/spdk4 00:25:17.288 Removing: /var/run/dpdk/spdk_pid1121633 00:25:17.288 Removing: /var/run/dpdk/spdk_pid1122359 00:25:17.288 Removing: /var/run/dpdk/spdk_pid1123307 00:25:17.288 Removing: /var/run/dpdk/spdk_pid1123632 00:25:17.288 Removing: /var/run/dpdk/spdk_pid1124349 00:25:17.288 Removing: /var/run/dpdk/spdk_pid1124700 00:25:17.288 Removing: /var/run/dpdk/spdk_pid1125656 00:25:17.288 Removing: /var/run/dpdk/spdk_pid1125793 00:25:17.288 Removing: /var/run/dpdk/spdk_pid1126177 00:25:17.288 Removing: /var/run/dpdk/spdk_pid1127490 00:25:17.288 Removing: /var/run/dpdk/spdk_pid1128413 00:25:17.288 Removing: /var/run/dpdk/spdk_pid1128721 
00:25:17.288 Removing: /var/run/dpdk/spdk_pid1128912 00:25:17.288 Removing: /var/run/dpdk/spdk_pid1129177 00:25:17.288 Removing: /var/run/dpdk/spdk_pid1129441 00:25:17.288 Removing: /var/run/dpdk/spdk_pid1129602 00:25:17.288 Removing: /var/run/dpdk/spdk_pid1129858 00:25:17.288 Removing: /var/run/dpdk/spdk_pid1130059 00:25:17.288 Removing: /var/run/dpdk/spdk_pid1130516 00:25:17.288 Removing: /var/run/dpdk/spdk_pid1132869 00:25:17.288 Removing: /var/run/dpdk/spdk_pid1133162 00:25:17.288 Removing: /var/run/dpdk/spdk_pid1133322 00:25:17.288 Removing: /var/run/dpdk/spdk_pid1133460 00:25:17.288 Removing: /var/run/dpdk/spdk_pid1133771 00:25:17.288 Removing: /var/run/dpdk/spdk_pid1133894 00:25:17.288 Removing: /var/run/dpdk/spdk_pid1134207 00:25:17.288 Removing: /var/run/dpdk/spdk_pid1134331 00:25:17.288 Removing: /var/run/dpdk/spdk_pid1134500 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1134592 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1134800 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1134916 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1135302 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1135462 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1135659 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1135961 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1135982 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1136173 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1136326 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1136603 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1136761 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1136918 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1137198 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1137354 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1137526 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1137784 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1137951 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1138182 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1138386 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1138538 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1138814 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1138972 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1139132 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1139402 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1139569 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1139728 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1140006 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1140170 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1140353 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1140562 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1143046 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1172662 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1175566 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1182842 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1186557 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1189459 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1189870 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1197939 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1197948 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1198994 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1199648 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1200311 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1200712 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1200715 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1200898 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1200988 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1200990 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1201653 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1202303 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1202847 
00:25:17.547 Removing: /var/run/dpdk/spdk_pid1203251 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1203369 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1203515 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1204397 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1205120 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1210896 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1211172 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1214103 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1218095 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1220263 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1227397 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1234056 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1235365 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1236036 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1247235 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1249741 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1252945 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1254125 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1255448 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1255468 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1255608 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1255873 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1256316 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1257634 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1258502 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1258817 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1260432 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1260984 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1261552 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1264473 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1271589 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1274282 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1278423 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1279509 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1280610 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1283574 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1286228 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1291275 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1291283 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1294564 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1294733 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1294867 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1295141 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1295261 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1298177 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1298515 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1301471 00:25:17.547 Removing: /var/run/dpdk/spdk_pid1303460 00:25:17.806 Removing: /var/run/dpdk/spdk_pid1307904 00:25:17.806 Removing: /var/run/dpdk/spdk_pid1311516 00:25:17.806 Removing: /var/run/dpdk/spdk_pid1318175 00:25:17.806 Removing: /var/run/dpdk/spdk_pid1323082 00:25:17.806 Removing: /var/run/dpdk/spdk_pid1323146 00:25:17.806 Removing: /var/run/dpdk/spdk_pid1336209 00:25:17.806 Removing: /var/run/dpdk/spdk_pid1336739 00:25:17.806 Removing: /var/run/dpdk/spdk_pid1337282 00:25:17.806 Removing: /var/run/dpdk/spdk_pid1337687 00:25:17.806 Removing: /var/run/dpdk/spdk_pid1338264 00:25:17.806 Removing: /var/run/dpdk/spdk_pid1338689 00:25:17.806 Removing: /var/run/dpdk/spdk_pid1339217 00:25:17.806 Removing: /var/run/dpdk/spdk_pid1339744 00:25:17.806 Removing: /var/run/dpdk/spdk_pid1342915 00:25:17.806 Removing: /var/run/dpdk/spdk_pid1343427 00:25:17.806 Removing: /var/run/dpdk/spdk_pid1347633 00:25:17.806 Removing: /var/run/dpdk/spdk_pid1347816 00:25:17.806 Removing: /var/run/dpdk/spdk_pid1349415 00:25:17.806 Removing: /var/run/dpdk/spdk_pid1354894 
00:25:17.806 Removing: /var/run/dpdk/spdk_pid1355015 00:25:17.806 Removing: /var/run/dpdk/spdk_pid1358436 00:25:17.806 Removing: /var/run/dpdk/spdk_pid1359722 00:25:17.806 Removing: /var/run/dpdk/spdk_pid1361240 00:25:17.806 Removing: /var/run/dpdk/spdk_pid1361991 00:25:17.806 Removing: /var/run/dpdk/spdk_pid1363479 00:25:17.806 Removing: /var/run/dpdk/spdk_pid1364277 00:25:17.806 Removing: /var/run/dpdk/spdk_pid1370238 00:25:17.806 Removing: /var/run/dpdk/spdk_pid1370556 00:25:17.806 Removing: /var/run/dpdk/spdk_pid1370948 00:25:17.806 Removing: /var/run/dpdk/spdk_pid1372602 00:25:17.806 Removing: /var/run/dpdk/spdk_pid1373000 00:25:17.806 Removing: /var/run/dpdk/spdk_pid1373283 00:25:17.806 Removing: /var/run/dpdk/spdk_pid1376543 00:25:17.806 Removing: /var/run/dpdk/spdk_pid1376678 00:25:17.806 Removing: /var/run/dpdk/spdk_pid1378149 00:25:17.806 Clean 00:25:17.806 01:12:30 -- common/autotest_common.sh@1447 -- # return 0 00:25:17.806 01:12:30 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:25:17.806 01:12:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:17.806 01:12:30 -- common/autotest_common.sh@10 -- # set +x 00:25:17.806 01:12:30 -- spdk/autotest.sh@382 -- # timing_exit autotest 00:25:17.806 01:12:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:17.806 01:12:30 -- common/autotest_common.sh@10 -- # set +x 00:25:17.806 01:12:30 -- spdk/autotest.sh@383 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:25:17.806 01:12:30 -- spdk/autotest.sh@385 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:25:17.806 01:12:30 -- spdk/autotest.sh@385 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:25:17.806 01:12:30 -- spdk/autotest.sh@387 -- # hash lcov 00:25:17.806 01:12:30 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:25:17.806 01:12:30 -- spdk/autotest.sh@389 -- # hostname 00:25:17.806 01:12:30 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:25:18.064 geninfo: WARNING: invalid characters removed from testname! 
00:25:50.167 01:12:57 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:25:50.167 01:13:01 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:25:51.541 01:13:03 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:25:54.822 01:13:06 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:25:57.348 01:13:09 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:26:00.627 01:13:12 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:26:03.155 01:13:15 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:26:03.155 01:13:15 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:03.155 01:13:15 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:26:03.155 01:13:15 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:03.155 01:13:15 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:03.155 01:13:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.155 01:13:15 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.155 01:13:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.155 01:13:15 -- paths/export.sh@5 -- $ export PATH 00:26:03.155 01:13:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.155 01:13:15 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:26:03.155 01:13:15 -- common/autobuild_common.sh@437 -- $ date +%s 00:26:03.155 01:13:15 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715728395.XXXXXX 00:26:03.155 01:13:15 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715728395.xOmz50 00:26:03.155 01:13:15 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:26:03.155 01:13:15 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:26:03.155 01:13:15 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:26:03.155 01:13:15 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:26:03.155 01:13:15 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:26:03.155 01:13:15 -- common/autobuild_common.sh@453 -- $ get_config_params 00:26:03.155 01:13:15 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:26:03.155 01:13:15 -- common/autotest_common.sh@10 -- $ set +x 00:26:03.155 01:13:15 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:26:03.155 01:13:15 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:26:03.155 01:13:15 -- pm/common@17 -- $ local monitor 00:26:03.155 01:13:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:03.155 01:13:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:03.155 01:13:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:03.155 01:13:15 -- pm/common@21 -- $ date +%s 00:26:03.155 01:13:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:03.155 01:13:15 -- pm/common@21 -- $ date +%s 00:26:03.155 
01:13:15 -- pm/common@25 -- $ sleep 1 00:26:03.155 01:13:15 -- pm/common@21 -- $ date +%s 00:26:03.155 01:13:15 -- pm/common@21 -- $ date +%s 00:26:03.155 01:13:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715728395 00:26:03.155 01:13:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715728395 00:26:03.155 01:13:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715728395 00:26:03.155 01:13:15 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715728395 00:26:03.155 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715728395_collect-vmstat.pm.log 00:26:03.155 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715728395_collect-cpu-load.pm.log 00:26:03.155 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715728395_collect-cpu-temp.pm.log 00:26:03.155 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715728395_collect-bmc-pm.bmc.pm.log 00:26:04.095 01:13:16 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:26:04.095 01:13:16 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:26:04.095 01:13:16 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:26:04.095 01:13:16 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:26:04.095 01:13:16 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:26:04.095 01:13:16 -- spdk/autopackage.sh@19 -- $ timing_finish 00:26:04.095 01:13:16 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:26:04.095 01:13:16 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:26:04.095 01:13:16 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:26:04.095 01:13:16 -- spdk/autopackage.sh@20 -- $ exit 0 00:26:04.095 01:13:16 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:26:04.095 01:13:16 -- pm/common@29 -- $ signal_monitor_resources TERM 00:26:04.095 01:13:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:26:04.095 01:13:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:04.095 01:13:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:26:04.095 01:13:16 -- pm/common@44 -- $ pid=1387321 00:26:04.095 01:13:16 -- pm/common@50 -- $ kill -TERM 1387321 00:26:04.095 01:13:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:04.095 01:13:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:26:04.095 01:13:16 -- pm/common@44 -- $ pid=1387323 00:26:04.095 01:13:16 -- pm/common@50 -- $ kill 
-TERM 1387323 00:26:04.095 01:13:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:04.095 01:13:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:26:04.095 01:13:16 -- pm/common@44 -- $ pid=1387325 00:26:04.095 01:13:16 -- pm/common@50 -- $ kill -TERM 1387325 00:26:04.095 01:13:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:04.095 01:13:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:26:04.095 01:13:16 -- pm/common@44 -- $ pid=1387359 00:26:04.095 01:13:16 -- pm/common@50 -- $ sudo -E kill -TERM 1387359 00:26:04.095 + [[ -n 1035976 ]] 00:26:04.095 + sudo kill 1035976 00:26:04.106 [Pipeline] } 00:26:04.124 [Pipeline] // stage 00:26:04.129 [Pipeline] } 00:26:04.142 [Pipeline] // timeout 00:26:04.146 [Pipeline] } 00:26:04.159 [Pipeline] // catchError 00:26:04.163 [Pipeline] } 00:26:04.178 [Pipeline] // wrap 00:26:04.184 [Pipeline] } 00:26:04.198 [Pipeline] // catchError 00:26:04.206 [Pipeline] stage 00:26:04.207 [Pipeline] { (Epilogue) 00:26:04.231 [Pipeline] catchError 00:26:04.232 [Pipeline] { 00:26:04.243 [Pipeline] echo 00:26:04.244 Cleanup processes 00:26:04.249 [Pipeline] sh 00:26:04.534 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:26:04.534 1387461 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:26:04.534 1387587 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:26:04.548 [Pipeline] sh 00:26:04.847 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:26:04.847 ++ grep -v 'sudo pgrep' 00:26:04.847 ++ awk '{print $1}' 00:26:04.847 + sudo kill -9 1387461 00:26:04.859 [Pipeline] sh 00:26:05.142 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:26:13.284 [Pipeline] sh 00:26:13.570 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:26:13.570 Artifacts sizes are good 00:26:13.584 [Pipeline] archiveArtifacts 00:26:13.591 Archiving artifacts 00:26:13.781 [Pipeline] sh 00:26:14.063 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:26:14.076 [Pipeline] cleanWs 00:26:14.086 [WS-CLEANUP] Deleting project workspace... 00:26:14.086 [WS-CLEANUP] Deferred wipeout is used... 00:26:14.092 [WS-CLEANUP] done 00:26:14.094 [Pipeline] } 00:26:14.113 [Pipeline] // catchError 00:26:14.124 [Pipeline] sh 00:26:14.405 + logger -p user.info -t JENKINS-CI 00:26:14.413 [Pipeline] } 00:26:14.427 [Pipeline] // stage 00:26:14.432 [Pipeline] } 00:26:14.450 [Pipeline] // node 00:26:14.456 [Pipeline] End of Pipeline 00:26:14.509 Finished: SUCCESS